Scott's Weblog: The weblog of an IT pro specializing in cloud computing, virtualization, and networking, all with an open source view

Some Tools to Help Learn Kubernetes

Kubernetes is emerging as the clear leader in the container orchestration space, which makes it an important technology to know and understand. However, like other distributed systems, Kubernetes can be challenging to learn simply because of the effort involved in getting it up and running. At first, it's not about learning to set up Kubernetes (that comes in time); it's about understanding what Kubernetes does and how to use it. In this post, I'll share some tools to help with exactly that.

Note that this post is not intended to be a comprehensive list of learning resources for Kubernetes. Also, this post is not focused on providing resources to help you learn to deploy Kubernetes. Instead, I’m focusing here on tools and services that let you get Kubernetes up and running quickly and easily so that you can focus on using Kubernetes (deploying applications and workloads onto Kubernetes). I’m sure there are many more tools/options than what I have listed here; these are just some that I have used and feel might be useful for others.

I’ll briefly cover the following tools and services:

  • Minikube
  • Kops
  • Kube-aws
  • Azure Container Service (ACS/AKS)

You’ll note I’m focused on command-line tools here, since I’m a CLI junkie. Let’s start with Minikube.

Using Minikube

Minikube is a tool to run a single-node Kubernetes cluster locally on your system, using one of a number of supported hypervisors (VirtualBox, VMware Fusion, Hyper-V, xhyve, or KVM). Because it deploys a single-node Kubernetes cluster, you’re limited in the types of things that you can do with the resulting Kubernetes cluster; however, it does offer the ability to work locally while most of the other tools require connectivity to a cloud provider (not unexpectedly, given the nature of what Kubernetes is and what it’s designed to do). Of course, the trade-off for working locally is that your Kubernetes cluster won’t be able to take advantage of cloud provider functionality, like services defined as type LoadBalancer.
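If you have a supported hypervisor installed, getting a local cluster running with Minikube is a very short exercise. Here's a minimal sketch, assuming VirtualBox as the hypervisor (the --vm-driver flag reflects the Minikube releases current as of this writing):

# start a single-node Kubernetes cluster using the VirtualBox driver
minikube start --vm-driver=virtualbox

# verify that kubectl can talk to the new cluster
kubectl get nodes

# tear the cluster down when you're done
minikube delete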

(As an aside, since Minikube leverages libmachine—from Docker Machine—on the back-end, using both Minikube and Docker Machine with KVM on the same system has some interesting side effects. You’ve been warned.)

Despite the limitations, I think there’s probably still some value in using Minikube for certain situations.

Using Kops (Kubernetes Operations)

The kops tool can be used to (relatively) quickly and easily stand up Kubernetes clusters on AWS (support for other platforms is in the works). Typically, kops requires a DNS domain hosted in AWS Route 53 (although there are workarounds), and it does require an S3 bucket in which to store cluster/configuration state (no workaround for this). kops also allows you to create some pretty advanced configurations, such as HA deployments spread across multiple availability zones (AZs) and using SSH bastion hosts. I also like that you can use kops to generate Terraform configurations for setting up Kubernetes clusters.

However, you can also use it to set up quick and simple Kubernetes clusters for learning purposes. Here’s an example command-line that could be used to turn up a small Kubernetes cluster using kops:

kops create cluster \
--node-count 3 \
--zones us-west-2a \
--dns-zone route53domain.com \
--node-size t2.large \
--networking flannel \
--ssh-public-key /path/to/public/key.pub \
clustername.route53domain.com
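Note that kops create cluster only builds the cluster specification; you still need to apply it. kops also needs to be pointed at the S3 bucket used for state storage (via the KOPS_STATE_STORE environment variable or the --state flag). A rough sketch of the rest of the workflow, with a hypothetical bucket name, might look like this:

# point kops at the S3 bucket used for cluster/configuration state
export KOPS_STATE_STORE=s3://my-kops-state-bucket

# actually build the cluster described by the specification
kops update cluster clustername.route53domain.com --yes

# check that the cluster comes up healthy
kops validate cluster clustername.route53domain.com

# tear everything down when you're finished
kops delete cluster clustername.route53domain.com --yes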

I won't go into all the details on using kops (though I might do another post on kops at some point in the future). Instead, I'll refer you to the AWS guide in the kops repository for more information and details. There are also a number of blog posts out there; here's one example, and here's one from AWS themselves.

Using kube-aws

As the name implies, kube-aws is a command-line tool for deploying a Kubernetes cluster on AWS. Like kops, kube-aws needs a Route 53 domain and an S3 bucket for storing state/configuration information. kube-aws also requires a KMS key. On the back-end, kube-aws is leveraging CloudFormation to spin up the Kubernetes cluster.
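Just to give you a feel for the workflow, here's a sketch of the commands involved; all of the names, domains, and the KMS key ARN below are placeholders, so refer to the kube-aws documentation for the authoritative steps:

# generate the initial cluster.yaml configuration
kube-aws init --cluster-name=my-cluster \
--external-dns-name=cluster.route53domain.com \
--region=us-west-2 \
--availability-zone=us-west-2a \
--key-name=my-keypair \
--kms-key-arn=arn:aws:kms:us-west-2:123456789012:key/example

# render credentials and the CloudFormation stack templates
kube-aws render credentials --generate-ca
kube-aws render stack

# validate the generated assets and launch the cluster
kube-aws validate --s3-uri s3://my-kube-aws-bucket
kube-aws up --s3-uri s3://my-kube-aws-bucket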

As with the other tools, I won’t go into great detail on how to use kube-aws; instead, I’ll direct you to the documentation (which seems really good).

Using Azure Container Service (ACS/AKS)

Using the Azure CLI, it’s pretty easy to spin up a Kubernetes cluster. In fact, there are a couple of different ways to go about it.

You can use ACS (Azure Container Service) and specify Kubernetes as the orchestrator type (ACS supports other types of orchestrators as well):

az acs create --orchestrator-type=kubernetes \
--resource-group=my-grp --name=my-clus \
--ssh-key-value=/path/to/public/key

After 5-7 minutes, the command should complete and you'll be ready to roll with a Kubernetes cluster running in Azure. Use the az acs kubernetes get-credentials command to configure kubectl to talk to the new cluster.
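For example, using the same resource group and cluster name as above, that would look something like this:

# merge the new cluster's credentials into your kubectl configuration
az acs kubernetes get-credentials --resource-group=my-grp --name=my-clus

# confirm that kubectl can reach the cluster
kubectl get nodes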

You can also use AKS (Azure Kubernetes Service, a newer offering that—as I understand it—supersedes/replaces ACS) to turn up a Kubernetes cluster (AKS only supports Kubernetes):

az aks create --resource-group=my-grp --name=my-clus \
--ssh-key-value /path/to/public/key

As with az acs create, after a few minutes you’ll have a shiny new Kubernetes cluster up and running. From there, you can use az aks upgrade to potentially upgrade your cluster, or az aks get-credentials to pull down a configuration that allows kubectl to interact with your new cluster.
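Here's a quick sketch of what that might look like; the Kubernetes version shown is purely an example, so check what az aks get-upgrades actually reports for your cluster:

# pull down credentials so kubectl can talk to the new cluster
az aks get-credentials --resource-group=my-grp --name=my-clus
kubectl get nodes

# see what upgrades are available for this cluster, then upgrade
az aks get-upgrades --resource-group=my-grp --name=my-clus -o table
az aks upgrade --resource-group=my-grp --name=my-clus --kubernetes-version 1.8.7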

(Note that there was an issue with some versions of the Azure CLI prior to 2.0.25 that caused az aks get-credentials to fail. Upgrading to 2.0.25 or later seems to address the problem. If you need to upgrade the Azure CLI and you installed via pip, I recommend using the --force-reinstall flag to make sure the upgrade completes successfully.)

Sebastien Goasguen also has a brief write-up of ACS/AKS (he also covers Azure Container Instances, ACI).

Why no mention of Google Kubernetes Engine (GKE)?

As a hosted/managed solution, GKE would fall into the same category as ACS/AKS. However, I didn’t include GKE here for the simple reason that I was focusing on CLI-based tools, and (in my opinion) Google makes it way too difficult to get the gcloud tool installed on your system. (Seriously—if you’re going to write it in Python, why not just use pip?) There is a Docker container available, but at nearly 1GB in size (last time I checked) I don’t feel that’s a very attractive option either.

If you don’t mind jumping through hoops to get gcloud installed (or if you’d prefer to use a GUI), then I’m sure GKE is a perfectly acceptable option.

There you have it…a few options for quickly and easily getting Kubernetes up and running (and you don’t need a massive home lab, either). Enjoy!

Technology Short Take 94

Welcome to Technology Short Take 94! Ready for another round of links, articles, and thoughts on data center technologies? (Who knows, maybe I’ll throw a rant or two in there.) OK, enough rambling…here’s the good stuff!

Networking

  • Amit Aneja has a two-part series (so far) explaining the routing architecture in NSX-T (which brings multi-hypervisor and multi-cloud support to the NSX platform). This is some good content and reminds me of the old NVP/NSX content I generated back in the day. Ah, good times…anyway, check out Amit’s stuff here and here.
  • Sam McGeown has a nice diagram of the communications channels between the various VMware NSX components.
  • Roie Ben Haim has a post providing an introduction to NSX and Kubernetes.
  • Matt Oswalt tackles the idea of “intent-driven” or “intent-based” networking—all the rage right now—and outlines how something like this must interact with domains outside of networking in order to be effective. I particularly liked his (mini-)rant about how network automation can’t be only about making the network engineer’s life easier. Oh, snap!
  • I’m not really sure if this belongs in networking or not (how does one classify OS kernel-level work on networking and security?), but we’ll stick it here anyway: check out this blog post on cilium-health, a tool for troubleshooting cluster connectivity.
  • Here’s an interesting post on how Simon Metzger pulled together Salt, NAPALM, and Kubernetes for a proof-of-concept on using Salt (via NAPALM) to manage network devices.
  • It’s good to see my friend Brent Salisbury blogging again, this time writing about measuring bandwidth using iperf and Docker.

Servers/Hardware

Nothing this time around (sorry!). I’ll stay alert for content that I might be able to include in the next Tech Short Take.

Security

  • If you’re using CloudFormation to manage your AWS security groups in an infrastructure-as-code approach, you owe it to yourself to check out this article by Jose Luis Ordiales.
  • Security is a many-faceted area, and one facet is appropriately protecting credentials. This article by Sjors Robroek shows one approach to appropriately protecting credentials when using automation tools.

Cloud Computing/Cloud Management

  • I guess we can talk automation in this section, right? Yasen Simeonov talks about the work that went into the NSX-T APIs (JSON-based, hooray!) and how to use the Python SDK for NSX-T. The content is a bit high-level, but given the medium (a blog post) it’s hard to get super-deep. It serves as a decent introduction, at least.
  • In the last Tech Short Take, I shared an article by Kynan Riley on node affinity. This time around, he’s talking about pod affinity.
  • I’m a 1Password user (just wish they’d release a “real” Linux client and bring back support for local vaults to all platforms), so seeing this article on how Agile Bits (the company behind 1Password) is using Terraform with AWS was pretty cool.
  • Kim Bottu walks readers through using Ravello to rebuild a (home) lab in about 20 minutes.
  • Christopher Berner has a detailed post on the lessons learned by scaling Kubernetes to 2,500 nodes.

Operating Systems/Applications

Storage

Virtualization

Career/Soft Skills

  • OK, so this isn’t really a “soft skill” per se, but it may prove useful nevertheless (and it didn’t fit anywhere else). Nick Janetakis writes about accessing documentation directly from your code editor using either Dash (macOS) or Zeal (open source, multiple platforms). I’ve been a Dash user for a while and I can attest to how handy it can be.

And that’s another Tech Short Take in the bag. I hope you found something useful here! As always, feel free to hit me up on Twitter if you have any feedback (or even if you just want to chat).

Running OVS on Fedora Atomic Host

In this post, I’d like to share the results of some testing I’ve been doing to run Open vSwitch (OVS) in containers on a container-optimized Linux distribution such as Atomic Host (Fedora Atomic Host, specifically). I’m still relatively early in my exploration of this topic, but I felt like sharing what I’ve found so far might be helpful to others, and might help spark conversations within the relevant communities about how this experience might be improved.

The reason for the use of Docker containers in this approach is twofold:

  1. Many of the newer container-optimized Linux distributions—CoreOS Container Linux (soon to be part of Red Hat in some fashion), Project Atomic, etc.—eschew “traditional” package management solutions in favor of containers.
  2. Part of the reason behind my testing was to help the OVS community better understand what it would look like to run OVS in containers so as to help make OVS a better citizen on container-optimized Linux distributions.

In this post, I’ll be using Fedora 27 Atomic Host (via Vagrant with VirtualBox). If you use a different version or release of Atomic Host, your results may differ somewhat. For the OVS containers, I’m using the excellent keldaio/ovs Docker containers.

As it turns out, running OVS on Fedora Atomic Host using the Kelda Docker images is really straightforward. As stated in Kelda’s README for the OVS Docker images, you just have to launch a couple of containers. First, you’d launch a container for the OVSDB server:

docker run -itd --net=host --name=ovsdb-server keldaio/ovs ovsdb-server

Next, you’d run a container for the ovs-vswitchd daemon:

docker run -itd --net=host --name=ovs-vswitchd --volumes-from=ovsdb-server --privileged keldaio/ovs ovs-vswitchd

That gets the core, essential parts of OVS up and running, but you’re not fully functional yet. The final ingredient is to load the OVS kernel module, which is part of the upstream Linux kernel. Normally, starting OVS-related daemons would initiate loading the kernel module, but with OVS encapsulated in Docker containers it won’t happen automatically. However, a quick sudo modprobe openvswitch will easily remedy that and get the OVS kernel module loaded.
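On the Atomic Host instance, that looks like this (the lsmod command is just there to verify that the module actually loaded):

# load the OVS kernel module and verify it's present
sudo modprobe openvswitch
lsmod | grep openvswitch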

From this point, running ovs-vsctl to configure OVS involves appending your command to a docker exec command, like this:

docker exec -it ovs-vswitchd ovs-vsctl show

This will run ovs-vsctl show in the “ovs-vswitchd” container, which is where the ovs-vswitchd daemon is running. All the standard ovs-vsctl commands should work here, such as adding a bridge (add-br), adding a port (add-port), deleting a port (del-port), and deleting a bridge (del-br).
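For example, a quick sketch of creating a bridge, adding a port (the interface name here is just a placeholder), and reviewing the configuration might look like this:

# create a bridge named br0
docker exec -it ovs-vswitchd ovs-vsctl add-br br0

# add a port to the bridge
docker exec -it ovs-vswitchd ovs-vsctl add-port br0 eth1

# review the resulting configuration
docker exec -it ovs-vswitchd ovs-vsctl show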

This is all pretty cool, but there’s a problem. OVS’ configuration is stored in the OVSDB database, which is itself found in the “ovsdb-server” container (which is why the vswitchd container needs to mount the same volumes as the “ovsdb-server” container via --volumes-from). What if the “ovsdb-server” container goes away? The OVS configuration goes away, too. (Don’t believe me? Stop, remove, and relaunch the “ovsdb-server” container using the command line above and see for yourself.)

Obviously, that’s not ideal. One possible solution is to use Docker volumes to store the data that OVSDB needs separate from the actual OVSDB server container.

Here’s how you’d make this work. First, you’d create some Docker volumes:

docker volume create var-lib-ovs
docker volume create var-log-ovs
docker volume create var-run-ovs
docker volume create etc-ovs

Then, you’d amend the command line for launching the “ovsdb-server” container to look like this instead:

docker run -itd --net=host --name=ovsdb-server -v var-lib-ovs:/var/lib/openvswitch -v var-log-ovs:/var/log/openvswitch -v var-run-ovs:/var/run/openvswitch -v etc-ovs:/etc/openvswitch keldaio/ovs ovsdb-server

The command line for launching the ovs-vswitchd container remains unchanged, since it just mounts the same volumes as the OVSDB server container.

With this approach, the OVS configuration is now stored separately from the containers where the OVSDB server and ovs-vswitchd processes run, which means we can easily kill and restart those containers without negatively impacting the OVS configuration. This also brings us one step closer to using systemd to manage the OVS containers, since we can now persist data across container instances.
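A quick way to convince yourself of this is to add a bridge, destroy and relaunch both containers, and check that the bridge survives. Using the same container names and named volumes as above, that looks something like this:

# create a bridge, then remove both OVS containers
docker exec -it ovs-vswitchd ovs-vsctl add-br br-test
docker rm -f ovs-vswitchd ovsdb-server

# relaunch the containers against the same named volumes
docker run -itd --net=host --name=ovsdb-server -v var-lib-ovs:/var/lib/openvswitch -v var-log-ovs:/var/log/openvswitch -v var-run-ovs:/var/run/openvswitch -v etc-ovs:/etc/openvswitch keldaio/ovs ovsdb-server
docker run -itd --net=host --name=ovs-vswitchd --volumes-from=ovsdb-server --privileged keldaio/ovs ovs-vswitchd

# the bridge created earlier should still be there
docker exec -it ovs-vswitchd ovs-vsctl show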

I’m still exploring this sort of configuration, so I’ll share additional information I uncover in future blog posts. In the meantime, if I’ve made an error in this post or if there is a suggestion you’d like to make to improve the post, feel free to contact me on Twitter.

Using Docker Machine with Azure

I’ve written about using Docker Machine with a number of different providers, such as with AWS, with OpenStack, and even with a local KVM/Libvirt daemon. In this post, I’ll expand that series to show using Docker Machine with Azure. (This is a follow-up to my earlier post on experimenting with Azure.)

As with most of the other Docker Machine providers, using Docker Machine with Azure is reasonably straightforward. Run docker-machine create -d azure --help to get an idea of some of the parameters you can use when creating VMs on Azure using Docker Machine. A full list of the various parameters and options for the Azure driver is also available.

The only required parameter is --azure-subscription-id, which specifies your Azure subscription ID. If you don’t know this, or want to obtain it programmatically, you can use this Azure CLI command:

az account show --query "id" -o tsv

If you have more than one subscription, you’ll probably need to modify this command to filter it down to the specific subscription you want to use.
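For example, a JMESPath query along these lines will return just the ID of a subscription selected by name (the subscription name here is obviously a placeholder):

# return only the ID of the subscription named "My Subscription"
az account list --query "[?name=='My Subscription'].id" -o tsv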

Additional parameters that you can supply include (but aren’t limited to):

  • Use the --azure-image parameter to specify the VM image you’d like to use. By default, the Azure driver uses Ubuntu 16.04.
  • By default, the Azure driver launches a Standard_A2 VM. If you’d like to use a different size, just supply the --azure-size parameter.
  • The --azure-location parameter lets you specify an Azure region other than the default, which is “westus”.
  • You can specify a non-default resource group (the default value is “docker-machine”) by using the --azure-resource-group parameter.
  • The Azure driver defaults to a username of “docker-user”; use the --azure-ssh-user parameter to specify a different name.
  • You can customize networking configurations using the --azure-subnet-prefix, --azure-subnet, and --azure-vnet options. Default values for these options are 192.168.0.0/16, “docker-machine”, and “docker-machine”, respectively.

So what would a complete command look like? Using Bash command substitution to supply the Azure subscription ID, a sample command might look like this:

docker-machine create -d azure \
--azure-subscription-id $(az account show --query "id" -o tsv) \
--azure-location westus2 \
--azure-ssh-user ubuntu \
--azure-size "Standard_B1ms" \
dm-azure-test

This would create an Azure VM named “dm-azure-test”, based on the (default) Ubuntu 16.04 LTS image, in the “westus2” Azure region and using a username of “ubuntu”. Once the VM is running and responding across the network, Docker Machine will provision and configure Docker Engine on the VM.

Once the VM is up, all the same docker-machine commands are available:

  • docker-machine ls will list all configured machines (systems managed via Docker Machine); this is across all supported Docker Machine providers
  • docker-machine ssh <name> to establish an SSH connection to the VM
  • eval $(docker-machine env <name>) to establish a Docker configuration pointing to the remote VM (this would allow you to use a local Docker client to communicate with the remote Docker Engine instance)
  • docker-machine stop <name> stops the VM (which can be restarted using docker-machine start <name>, naturally)
  • docker-machine rm <name> deletes the VM

Clearly, there’s more available, but this should be enough to get most folks rolling.
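As a quick illustration of the workflow, here's a sketch of pointing a local Docker client at the new Azure VM and running a container there (the nginx image is just an arbitrary example):

# point the local Docker client at the remote Docker Engine
eval $(docker-machine env dm-azure-test)

# run a container on the Azure VM and verify it's running
docker run -d --name web -p 80:80 nginx
docker ps

# clean up the VM when finished
docker-machine rm dm-azure-test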

If I’ve missed something (or gotten it incorrect), please hit me up on Twitter. I’ll happily make corrections where applicable.

An Update on Using Docker Machine with Vagrant

As part of a project on which I’m working, I’ve been spending some time working with Docker Machine and Vagrant over the last few days. You may recall that I first wrote about using these two tools together back in August 2015. As a result of spending some additional time with these tools—which I chose because I felt like they streamlined some work around this project—I’ve uncovered some additional information that I wanted to share with readers.

As a brief recap to the original article, I showed how you could use Vagrant to quickly and easily spin up a VM, then use Docker Machine’s generic driver to add it to Docker Machine, like this:

docker-machine create -d generic \
--generic-ssh-user vagrant \
--generic-ssh-key ~/.vagrant.d/insecure_private_key \
--generic-ip-address <IP address of VM> \
<name of VM>

This approach works fine if the Vagrant-created VM is reachable without port forwarding. What do I mean? In the past, the VMware provider for Vagrant used functionality in VMware Fusion or VMware Workstation to provide an RFC 1918-addressed network that had external access via network address translation (NAT). In Fusion, for example, this was the default “Share with my Mac” network. Thus, when you created a VM using Vagrant and the VMware provider for Vagrant, the first network interface card (NIC) in the VM would be assigned to this default NAT network and get an RFC 1918-style private IP address (on my Mac Pro, this network uses the 192.168.70.0/24 network range, but I don’t know if that’s the default for all systems). As a result, when you would run vagrant ssh-config for such a VM, you’d get something like this:

Host coreos-01
  HostName 192.168.70.132
  User core
  Port 22
  UserKnownHostsFile /dev/null
  StrictHostKeyChecking no
  PasswordAuthentication no
  IdentityFile /Users/slowe/.vagrant.d/insecure_private_key
  IdentitiesOnly yes
  LogLevel FATAL

With version 5 of the VMware provider for Vagrant (which is required in order to use the latest versions of Fusion and Workstation), this behavior seems to have changed, although it’s not clear if this is intentional or not (there’s an open GitHub issue about this change). Sometimes Vagrant will report connection information as shown above, but other times it will report using a forwarded port on the loopback address:

Host coreos-01
  HostName 127.0.0.1
  User core
  Port 2222
  UserKnownHostsFile /dev/null
  StrictHostKeyChecking no
  PasswordAuthentication no
  IdentityFile /Users/slowe/.vagrant.d/insecure_private_key
  IdentitiesOnly yes
  LogLevel FATAL

This occurs, by the way, even when an actual RFC 1918-style private IP address has, in fact, been assigned to the VM and is functioning normally.

This behavior is a big change from the previous behavior, but it does bring it in line with how Vagrant behaves when used with VirtualBox (which uses a forwarded port on the loopback address for connectivity). The Libvirt provider uses an approach similar to previous versions of the VMware provider (an RFC 1918-style private address instead of a forwarded port on the loopback address).

All of this brings us to the original matter at hand: using Docker Machine and Vagrant together. If you’re using a Vagrant provider that does not use a forwarded port on the loopback address, then the command I provided in the original article (and earlier in this article) will work just fine.

If, on the other hand, you’re using a provider that does use a forwarded port on the loopback address, then you’ll need to amend the command slightly:

docker-machine create -d generic \
--generic-ssh-user vagrant \
--generic-ssh-key ~/.vagrant.d/insecure_private_key \
--generic-ssh-port 2222 \
--generic-ip-address 127.0.0.1 \
<name of VM>

Naturally, you’ll need to replace 2222 with the SSH port reported by vagrant ssh-config, since it may change when you’re running multiple VMs under Vagrant. (You may also need to change the username specified via --generic-ssh-user, since some distributions—I’m looking at you, CoreOS Container Linux—use a different username.)
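If you'd rather not copy those values by hand, you can pull them straight out of vagrant ssh-config. Here's a rough sketch; the VM name and the "core" username match the example output shown earlier:

# extract the SSH port and address reported by Vagrant
SSH_PORT=$(vagrant ssh-config coreos-01 | awk '/Port/ {print $2}')
SSH_HOST=$(vagrant ssh-config coreos-01 | awk '/HostName/ {print $2}')

# feed those values to Docker Machine's generic driver
docker-machine create -d generic \
--generic-ssh-user core \
--generic-ssh-key ~/.vagrant.d/insecure_private_key \
--generic-ssh-port $SSH_PORT \
--generic-ip-address $SSH_HOST \
coreos-01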

Additionally, you’ll also need to forward port 2376 (the port that Docker uses to communicate across the network) in your Vagrantfile by adding a snippet like this:

# Configure port forwarding to support remote access to Docker Engine
config.vm.network "forwarded_port", guest: 2376, host: 2376

With the forwarded port specified in the Vagrantfile and the updated docker-machine command that includes the correct SSH port, you’ll be able to use Vagrant to provision/manage the VM(s) and Docker Machine to provision/manage the Docker Engine(s).

If you have questions or corrections, feel free to drop me an email (my address isn’t terribly hard to find) or hit me up on Twitter.

Recent Posts

Technology Short Take 93

Welcome to Technology Short Take 93! Today I have another collection of data center technology links, articles, thoughts, and rants. Here’s hoping you find something useful!

Read more...

Experimenting with Azure

I’ve been experimenting with Microsoft Azure recently, and I thought it might be useful to share a quick post on using some of my favorite tools with Azure. I’ve found it useful to try to leverage existing tools whenever I can, and so as I’ve been experimenting with Azure I’ve been leveraging familiar tools like Docker Machine and Vagrant.

Read more...

Issue with VMware-Formatted Cumulus VX Vagrant Box

I recently had a need to revisit the use of Cumulus VX (the Cumulus Networks virtual appliance running Cumulus Linux) in a Vagrant environment, and I wanted to be sure to test what I was doing on multiple virtualization platforms. Via Vagrant Cloud, Cumulus distributes VirtualBox and Libvirt versions of Cumulus VX, and there is a slightly older version that also provides a VMware-formatted box. Unfortunately, there’s a simple error in the VMware-formatted box that prevents it from working. Here’s the fix.

Read more...

Technology Short Take 92

Welcome to Technology Short Take 92, the first Technology Short Take of 2018. This one was supposed to be the last Tech Short Take of 2017, but I didn’t get it published in time (I decided to spend time with my family instead—some things are just more important). In any case, hopefully the delay of one additional week hasn’t caused any undue stress—let’s jump right in!

Read more...

Looking Back: 2017 Project Report Card

As has become my custom for the past several years, I wanted to take a look at how well I fared on my 2017 project list. Normally I’d publish this before the end of 2017, but during this past holiday season I decided to more fully “unplug” and focus on the truly important things in life (like my family). So, here’s a look back at my 2017 projects and a report card on my progress (or lack thereof, in some cases).

Read more...

Installing XMind 8 on Fedora 27

XMind is a well-known cross-platform mind mapping application. Installing the latest version of XMind (version 8) on Linux is, unfortunately, more complicated than it should be. In this post, I’ll show how to get XMind 8 running on Fedora 27.

Read more...

Installing the VMware Horizon Client on Fedora 27

In this post, I’ll outline the steps necessary to install the VMware Horizon client for Linux on Fedora 27. Although VMware provides an “install bundle,” the bundle does not, unfortunately, address any of the prerequisites that are necessary in order for the Horizon client to work. Fortunately, some other folks shared their experiences, and building on their knowledge I was able to make it work. I hope that this post will, in turn, help others who may find themselves in the same situation.

Read more...

Using Vagrant with Azure

In this post, I’ll describe how to use Vagrant with Azure. You can consider this article an extension of some of my earlier Vagrant articles; namely, the posts on using Vagrant with AWS and using Vagrant with OpenStack. The theme across all these posts is examining how one might use Vagrant to simplify/streamline the consumption of resources from a provider using the familiar Vagrant workflow.

Read more...

Technology Short Take 91

Welcome to Technology Short Take 91! It’s been a bit longer than usual since the last Tech Short Take (partly due to the US Thanksgiving holiday, partly due to vacation time, and partly due to business travel), so apologies for that. Still, there’s a great collection of links and articles here for you, so dig in and enjoy.

Read more...

Installing the Azure CLI on Fedora 27

This post is a follow-up to a post from earlier this year on manually installing the Azure CLI on Fedora 25. I encourage you to refer back to that post for a bit of background. I’m writing this post because the procedure for manually installing the Azure CLI on Fedora 27 is slightly different than the procedure for Fedora 25.

Read more...

Using Vagrant with Libvirt on Fedora 27

In this post, I’m going to show you how to use Vagrant with Libvirt via the vagrant-libvirt provider when running on Fedora 27. Both Vagrant and Libvirt are topics I’ve covered more than a few times here on this site, but this is the first time I’ve discussed combining the two projects.

Read more...

AWS re:Invent 2017 Keynote with Andy Jassy

This is a liveblog of the re:Invent 2017 keynote with Andy Jassy, taking place on Wednesday at the Venetian. As fully expected given the long queues and massive crowds, even arriving an hour early to the keynote isn’t soon enough; there’s already a huge crowd gathered to make it into the venue. Fortunately, I did make it in and scored a reasonable seat from which to write this liveblog.

Read more...

Liveblog: Deep Dive on Amazon Elastic File System

This is a liveblog of the AWS re:Invent 2017 session titled “Deep Dive on Amazon Elastic File System (EFS).” The presenters are Edward Naim and Darryl Osborne, both with AWS. This is my last session of day 2 of re:Invent; thus far, most of my time has been spent in hands-on workshops with only a few breakout sessions today. EFS is a topic I’ve watched, but haven’t had time to really dig into, so I’m looking forward to this session.

Read more...

Liveblog: IPv6 in the Cloud - Protocol and Service Overview

This is a liveblog of an AWS re:Invent 2017 breakout session titled “IPv6 in the Cloud: Protocol and Service Overview.” The presenter’s name is Alan Halachmi, who is a Senior Manager of Solutions Architecture at AWS. As with so many of the other breakout sessions and workshops here at re:Invent this year, the queues to get into the session are long and it’s expected that the session will be completely full.

Read more...

A Sample Makefile for Creating Blog Articles

In October of this year, I published a blog post talking about a sample Makefile for publishing blog articles. That post focused on the use of make and a Makefile for automating the process of publishing a blog post. This post is a companion to that post, and focuses on the use of a Makefile for automating the creation of blog posts.

Read more...

Older Posts

Find more posts by browsing the post categories, content tags, or site archives pages. Thanks for visiting!