Scott's Weblog The weblog of an IT pro focusing on cloud computing, Kubernetes, Linux, containers, and networking

Setting up Wireguard for AWS VPC Access

Seeking more streamlined access to AWS EC2 instances on private subnets, I recently implemented Wireguard for VPN access. Wireguard, if you’re not familiar, is a relatively new solution that is baked into recent Linux kernels. (There is also support for other OSes.) In this post, I’ll share what I learned in setting up Wireguard for VPN access to my AWS environments.

Since the configuration of the clients and the servers is largely the same (especially since both client and server are Linux), I haven’t separated out the two configurations. At a high level, the process looks like this:

  1. Installing any necessary packages/software
  2. Generating Wireguard private and public keys
  3. Modifying the AWS environment to allow Wireguard traffic
  4. Setting up the Wireguard interface(s)
  5. Activating the VPN

The first thing to do, naturally, is install the necessary software.

Installing Packages/Software

On recent versions of Linux—I’m using Fedora (32 and 33) and Ubuntu 20.04—kernel support for Wireguard ships with the distribution. All that’s needed is to install the necessary userspace tools.

On Fedora, that’s done with dnf install wireguard-tools. On Ubuntu, the command is apt install wireguard-tools. (You can also install the wireguard meta-package, if you’d prefer.)

This page on the Wireguard site has full instructions for a variety of operating systems. macOS, for example, has an app in the App Store for Wireguard support.

Once the necessary Wireguard software is installed, then it’s time to start with the configuration of Wireguard. From here forward, I’ll focus only on Linux, as the instructions will vary fairly widely from OS to OS.

Generating Private and Public Keys

This step must be done on both sides of the connection. The installation of the “wireguard-tools” package provides a wg binary that you can use to generate the necessary keys. The steps below will generate a public and private key for you.

  1. Become root using sudo su -.
  2. Switch to the /etc/wireguard directory.
  3. Run wg genkey | tee privatekey | wg pubkey > publickey. This creates the public and private keys used by Wireguard.
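Putting those steps together, a minimal sketch might look like the following (this assumes the wireguard-tools package is installed; the umask call ensures the private key isn’t created world-readable):

```shell
# Run as root; restrict permissions on the files we're about to create
umask 077
cd /etc/wireguard

# Generate the private key and derive the matching public key from it
wg genkey | tee privatekey | wg pubkey > publickey

# The public key is what you'll share with each peer; the private key stays here
cat publickey
```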

With the keys now generated, you’re ready to prepare the AWS environment to allow Wireguard traffic.

Modifying the AWS Environment

By default, Wireguard uses UDP port 51820 as the listening port for the Wireguard interface. If you want or need to use multiple Wireguard interfaces, you’ll need either separate network interfaces or multiple listening ports. Modify the security group(s) to allow UDP port 51820 to the instance(s) that will have defined Wireguard interfaces.
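As a sketch, opening the port via the AWS CLI might look something like this (the security group ID and source CIDR below are placeholders—substitute your own values, and adjust the port if you’ve chosen a non-default ListenPort):

```shell
# Allow inbound Wireguard traffic (UDP) to the instances in this security group.
# The group ID and CIDR shown here are hypothetical placeholders.
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol udp \
    --port 51820 \
    --cidr 203.0.113.0/24
```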

Additionally, if you are going to route traffic through the VPN instance instead of masquerading it (using network address translation), then you’ll need to disable the source/destination check for the VPN instance. This can be accomplished fairly easily using the AWS CLI:

aws ec2 modify-instance-attribute --no-source-dest-check --instance-id <instance-id>

Setting up the Wireguard Interfaces

There are a couple different ways (at least) to set up the Wireguard interfaces. I’ll show you how to do it from the terminal with a configuration file (suitable for a headless server running in AWS) and how to do it from the GNOME user interface (an approach well-suited for a workstation being used to access resources in AWS).

Using a Configuration File from the CLI

Most of the Wireguard tutorials I saw focused only on this approach, so you’re likely to find other articles out there that share similar (or the same) information.

To set up a Wireguard interface using a configuration file from the CLI, create a wg<X>.conf file in /etc/wireguard, where <X> is the number of the interface. Typically you’d start with wg0 for the first VPN interface, but I’m not aware of any requirement to start with wg0. In this file, place the following contents:

[Interface]
PrivateKey = <private key for this machine>
Address = <IP address for Wireguard interface>
PostUp = iptables -A FORWARD -i wg0 -j ACCEPT; iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
PostDown = iptables -D FORWARD -i wg0 -j ACCEPT; iptables -t nat -D POSTROUTING -o eth0 -j MASQUERADE
ListenPort = 51820

[Peer]
PublicKey = <public key for peer machine>
AllowedIPs = <IP address for peer Wireguard interface>, <additional CIDRs>
PersistentKeepalive = 25

There are a few notes I want to make about this configuration file:

  • With regard to the IP address: you’ll have to decide whether all your Wireguard peers will share a common subnet, or whether you’ll have separate interfaces (and therefore separate subnets) for each peer. There are pros and cons to each approach. I decided to go with a common subnet among peers.
  • If you used an interface name other than wg0, be sure to adjust the PostUp and PostDown lines accordingly. Note that this configuration uses NAT to make the VPN traffic appear to the rest of the VPC as if it’s coming from the VPN instance; this avoids the need for disabling the source/destination check or updating routing tables.
  • Because my client devices are behind a NAT, I included the PersistentKeepalive setting. You may not need this (but I suspect many people will).
  • With regard to the <additional CIDRs> notation above: if you want other IP addresses from the peer’s network to be able to route through this connection, specify those addresses/networks here. This is perhaps more important on the “client” side configuration, where you’re funneling all traffic for a VPC (or group of VPCs) through a single Wireguard node.
  • You’ll need a separate [Peer] section for each VPN peer. In my case, I had three different systems from which I wanted VPN access, so there needed to be three separate [Peer] sections.

Once the interface is configured, then you can activate the interface using wg-quick up wg0 (or whatever interface name you’re using).
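On a systemd-based distribution, activating the interface and making it persistent across reboots might look like this sketch:

```shell
# Bring up the interface defined in /etc/wireguard/wg0.conf
wg-quick up wg0

# Verify the interface and peer status (look for a recent handshake)
wg show wg0

# Optionally, have systemd bring the interface up automatically at boot
systemctl enable wg-quick@wg0
```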

Using the GNOME User Interface

Using the CLI to configure the Wireguard interface(s) on a server is acceptable, especially in a use case like mine (establishing connectivity to EC2 instances). From the desktop, however, users may prefer using a graphical tool instead of the CLI. In this section, I’ll show what it looks like to use the GNOME Network Connections applet to configure your Wireguard interface(s).

First, you’d run nm-connection-editor to launch the GNOME Network Connections applet, which would look something like this (you’d have different connections with different names, naturally):

GNOME Network Connections window

Clicking on the + symbol in the lower left corner of the window will bring up a dialog to select the type of connection to add:

New interface selection dialog

Selecting “Wireguard” from this dialog and clicking “Create…” brings up the Wireguard page for the new connection:

Wireguard properties page

On this page, you’ll need to supply the following bits of information:

  • An interface name (like wg0 or wg1)
  • The private key generated earlier
  • Check the “Add peer routes” box to have the routing table updated with routes from this connection
  • Peer information

Click on “Add” under Peers to add a peer connection with this dialog box:

Wireguard peer properties

Here you’ll need to provide:

  • The public key for the peer that was generated earlier.
  • The IP addresses and address ranges that will be routed across this connection. As you can see in the screenshot above, for “Allowed IPs” you’ll want to not only specify the IP address of the peer Wireguard interface but also the IP range of the VPC behind the VPN gateway.
  • The endpoint (IP address and port) for the VPN gateway. As mentioned earlier, make sure this traffic is allowed through security groups, Network Access Control Lists, and other network traffic controls.
  • You can also set the persistent keepalive interval.

Click “Apply” to commit the changes to the peer configuration. You can use the “Add” button again to add additional peers.

If you want the VPN connection to come up automatically, flip over to the General page and check “Connect automatically with priority”:

General properties page

The last thing to do is assign an IP address to the interface, which is done via the “IPv4 Settings” and/or the “IPv6 Settings” pages. Here’s the “IPv4 Settings” page:

IPv4 properties page

The IP address you assign needs to be on the same subnet as the IP address given to/specified for the peer. As I mentioned earlier, this could be a common subnet (like a /29 or similar) among all the Wireguard peers, or it could be a separate subnet for each peer. Fill in the other sections as needed.

Click “Save” whenever you’re finished, and your new Wireguard VPN connection should be good to go!

Activating the VPN

Once the interfaces on both ends have been activated, the VPN connection(s) come up automatically. No additional steps are necessary to establish the VPN connections; the peer interfaces defined on each end negotiate a connection between themselves. You should be able to start accessing resources in the remote VPC almost immediately.

I hope this write-up proves useful to someone out there. If you have any questions, or if you feel something I’ve written is incorrect or inaccurate, please contact me on Twitter. Thanks for reading!

Closing out the Tokyo Assignment

In late 2019, I announced that I would be temporarily relocating to Tokyo for a six-month assignment to build out a team focused on cloud-native services and offerings. A few months later, I was still in Colorado, and I explained what was happening in a status update on the Tokyo assignment. I’ve had a few folks ask me about it, so I thought I’d go ahead and share that the Tokyo assignment did not happen and will not happen.

So why didn’t it happen? In my March 2020 update, I mentioned that paperwork, approvals, and proper budget allocations had slowed down the assignment, but then the pandemic hit. Many folks, myself included, expected that the pandemic would work itself out, but—as we now clearly know—it did not. And as the pandemic dragged on (and continues to drag on), restrictions on travel and concerns over public health and safety continued to mean that the assignment was not going to happen. As many of you know all too well, travel restrictions still exist even today.

OK, but why won’t it happen in the future, when the pandemic is under control? At the time when the Tokyo assignment was offered to me, there were a set of reasons it made sense. The in-country team had no strong Kubernetes and cloud-native expertise, and wanted someone from the former Heptio team to come in and help bootstrap folks. There were business opportunities the in-country team wanted to pursue that would have been possible with the team I had been charged with building out. In reality, though, this was a time-bounded window of opportunity. The longer the pandemic continued and delayed the assignment, the more this time-bounded opportunity window shrank. In-country management lured away folks from competitors who had the requisite experience, and the team started bootstrapping itself. Business opportunities shifted. Strong team members from other parts of the organization and other parts of the world ended up relocating to nearby centers of growth (Singapore, notably). Now, more than a year later, the assignment just doesn’t make sense. It’s no longer needed.

I won’t lie—I’m more than a little sad that the assignment didn’t and won’t happen. Such is life, though; we shift and adapt as the world shifts and changes around us. Perhaps at some point in the future a similar opportunity will arise.

Technology Short Take 137

Welcome to Technology Short Take #137! I’ve got a wide range of topics for you this time around—eBPF, Falco, Snort, Kyverno, etcd, VMware Code Stream, and more. Hopefully one of these links will prove useful to you. Enjoy!



  • I recently mentioned on Twitter that I was considering building out a new Linux PC to replace my aging Mac Pro (it’s a 2012 model, so going on 9 years old). Joe Utter shared with me his new lab build information, and now I’m sharing it with all of you. Sharing is caring, you know.


Cloud Computing/Cloud Management

Operating Systems/Applications

  • Turns out that the apt-key command on Debian and Debian derivatives (like Ubuntu and its derivatives) has been deprecated. This article walks users through how to work with OpenPGP repository signing keys without the use of apt-key.
  • I recently watched this YouTube video series on tmux in order to get more familiar with this very popular tool. I can definitely see the value, but it’s going to take me some time to adjust my habits and workflows to take advantage of tmux.
  • Red Hat continues its effort to commoditize Docker’s position with developers: this time by taking aim at Docker Compose.



Career/Soft Skills

  • Lee Briggs shares a great post on learning to code with infrastructure as code (using infrastructure as code is something I think is a good career move for pretty much everyone). I like how Lee shares some very specific recommendations on how folks can get started.

While I’d love to keep going, I’d better wrap it up here. If you have any feedback for me, feel free to hit me on Twitter. I’d love to hear from you.

Technology Short Take 136

Welcome to Technology Short Take #136, the first Short Take of 2021! The content this time around seems to be a bit more security-focused, but I’ve still managed to include a few links in other areas. Here’s hoping you find something useful!



  • Thinking of buying an M1-powered Mac? You may find this list helpful.


Cloud Computing/Cloud Management

Operating Systems/Applications

Career/Soft Skills

That’s it this time around! If you have any questions, comments, or corrections, feel free to contact me. I’m a regular visitor to the Kubernetes Slack instance, or you can just hit me on Twitter. Thanks!

Using Velero to Protect Cluster API

Cluster API (also known as CAPI) is, as you may already know, an effort within the upstream Kubernetes community to apply Kubernetes-style APIs to cluster lifecycle management—in short, to use Kubernetes to manage the lifecycle of Kubernetes clusters. If you’re unfamiliar with CAPI, I’d encourage you to check out my introduction to Cluster API before proceeding. In this post, I’m going to show you how to use Velero (formerly Heptio Ark) to back up and restore Cluster API objects so as to protect your organization against an unrecoverable issue on your Cluster API management cluster.

To be honest, this process is so straightforward it almost doesn’t need to be explained. In general, the process for backing up the CAPI management cluster looks like this:

  1. Pause CAPI reconciliation on the management cluster.
  2. Back up the CAPI resources.
  3. Resume CAPI reconciliation.

In the event of catastrophic failure, the recovery process looks like this:

  1. Restore from backup onto another management cluster.
  2. Resume CAPI reconciliation.

Let’s look at these steps in a bit more detail.

Pausing and Resuming Reconciliation

The process for pausing and resuming reconciliation of CAPI resources is outlined in this separate blog post. To summarize that post here for convenience, the Cluster API spec includes a paused field that causes the Cluster API controllers to stop reconciliation when the field is set to true (and resume reconciliation when the field is false or absent). Setting this field allows you, the cluster operator, to pause or resume reconciliation.
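For example, pausing and later resuming reconciliation for a workload cluster named workload-1 (a hypothetical name) might look like this sketch with kubectl:

```shell
# Pause reconciliation by setting spec.paused to true on the Cluster object
kubectl patch cluster workload-1 --type merge --patch '{"spec":{"paused":true}}'

# ...take the backup here...

# Resume reconciliation by setting the field back to false
kubectl patch cluster workload-1 --type merge --patch '{"spec":{"paused":false}}'
```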

Backing up CAPI Resources

Once you’ve paused reconciliation for Cluster API, you can then run a backup using Velero. Based on my testing, I didn’t see anything unusual or odd about running a backup; generally speaking, it looks to be as simple as velero backup create (with appropriate flags). Given the large number of custom resources used by Cluster API (Clusters, Machines, MachineDeployments, KubeadmConfigs, etc.), it may be challenging to include only Cluster API resources using Velero’s --include-resources functionality. It’s probably easier to either a) not use any of Velero’s filtering functionality and catch everything, or b) make sure you are using namespaces and/or labels comprehensively for CAPI objects and then use Velero’s --include-namespaces and/or --selector filtering options to select what’s included in the backup. Refer to Velero’s resource filtering documentation for more details.
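As a sketch, either approach might look like the following (the backup and namespace names here are hypothetical):

```shell
# Simplest approach: back up everything in the management cluster
velero backup create capi-backup

# Alternatively, if CAPI objects are confined to known namespaces,
# filter the backup by namespace
velero backup create capi-backup --include-namespaces capi-workloads
```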

Restoring from Backup

As with creating the backup using Velero, restoring from the Velero backup follows the standard Velero procedures (i.e., run velero restore create with appropriate flags/options). Naturally, the cluster to which you are restoring should be an appropriately-configured Cluster API management cluster with the appropriate Cluster API components already installed.
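Assuming a backup named capi-backup (a hypothetical name carried over from the backup step), the restore might be sketched like this:

```shell
# Restore the CAPI objects from the backup taken earlier
velero restore create --from-backup capi-backup

# Check the status of the restore
velero restore get
```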

Since this article is more focused on the “Oh no, my management cluster is dead” scenario, the disaster recovery information in the Velero docs applies here as well.

After the restore is complete, you’ll then want to resume reconciliation on the target/destination cluster, as outlined above.

Backup and Restore Versus Moving

The clusterctl utility used by CAPI for initializing management clusters (among other things) also has a move subcommand that can be used to move CAPI resources from one cluster to another. Some readers may be wondering why they should bother with Velero when they could use clusterctl move instead.

clusterctl move is a viable option for moving CAPI objects between two clusters as long as both the source and target clusters are up and running. Using Velero, on the other hand, only requires that the source cluster is up and running when a backup needs to be taken; users can then restore this backup to another cluster even if the source cluster has completely failed. I’m also of the opinion that Velero will provide more fine-grained control over what can be backed up and restored, although I have yet to test that directly.

Additional Resources

Readers may find the following resources useful as well:

Disaster recovery use case with Velero

Cluster migration use case with Velero

I hope that readers find this article helpful. If there’s anything I’ve discussed here that you’d like to see examined/explained in greater detail, feel free to let me know. You can find me on the Kubernetes Slack instance, or find me on Twitter. I’d love to hear from you!

Recent Posts

Details on the New Desk Layout

Over the holiday break I made some time to work on my desk layout, something I’d been wanting to do for quite a while. I’d been wanting to “up my game,” so to speak, with regard to producing more content, including some video content. Inspired by—and heavily borrowing from—this YouTube video, I decided I wanted to create a similar arrangement for my desk. In this post, I’ll share more details on my setup.


Technology Short Take 135

Welcome to Technology Short Take #135! This will likely be the last Technology Short Take of 2020, so it’s a tad longer than usual. Sorry about that! You know me—I just want to make sure everyone has plenty of technical content to read during the holidays. And speaking of holidays…whatever holidays you do (or don’t) celebrate, I hope that the rest of the year is a good one for you. Now, on to the content!


Bootstrapping a Cluster API Management Cluster

Cluster API is, if you’re not already familiar, an effort to bring declarative Kubernetes-style APIs to Kubernetes cluster lifecycle management. (I encourage you to check out my introduction to Cluster API post if you’re new to Cluster API.) Given that it is using Kubernetes-style APIs to manage Kubernetes clusters, there must be a management cluster with the Cluster API components installed. But how does one establish that management cluster? This is a question I’ve seen pop up several times in the Kubernetes Slack community. In this post, I’ll walk you through one way of bootstrapping a Cluster API management cluster.


Some Site Updates

For the last three years, the site has been largely unchanged with regard to the structure and overall function even while I continue to work to provide quality technical content. However, time was beginning to take its toll, and some “under the hood” work was needed. Over the Thanksgiving holiday, I spent some time updating the site, and there are a few changes I wanted to mention.


Assigning Node Labels During Kubernetes Cluster Bootstrapping

Given that Kubernetes is a primary focus of my day-to-day work, I spend a fair amount of time in the Kubernetes Slack community, trying to answer questions from users and generally be helpful. Recently, someone asked about assigning node labels while bootstrapping a cluster with kubeadm. I answered the question, but afterward started thinking that it might be a good idea to also share that same information via a blog post—my thinking being that others who also had the same question aren’t likely to be able to find my answer on Slack, but would be more likely to find a published blog post. So, in this post, I’ll show how to assign node labels while bootstrapping a Kubernetes cluster.


Pausing Cluster API Reconciliation

Cluster API is a topic I’ve discussed here in a number of posts. If you’re not already familiar with Cluster API (also known as CAPI), I’d encourage you to check out my introductory post on Cluster API first; you can also visit the official Cluster API site for more details. In this short post, I’m going to show you how to pause the reconciliation of Cluster API cluster objects, a task that may be necessary for a variety of reasons (including backing up the Cluster API objects in your management cluster).


Technology Short Take 134

Welcome to Technology Short Take #134! I’m publishing a bit early this time due to the Thanksgiving holiday in the US. So, for all my US readers, here’s some content to peruse while enjoying some turkey (or whatever you’re having this year). For my international readers, here’s some content to peruse while enjoying dramatically lower volumes of e-mail because the US is on holiday. See, something for everyone!


Review: CPLAY2air Wireless CarPlay Adapter

In late September, I was given a CPLAY2air wireless CarPlay adapter as a gift. Neither of my vehicles support wireless CarPlay, and so I was looking forward to using the CPLAY2air device to enable the use of CarPlay without having to have my phone plugged into a cable. Here’s my feedback on the CPLAY2air device after about six weeks of use.


Resizing Windows to a Specific Size on macOS

I recently had a need (OK, maybe more a desire than a need) to set my browser window(s) on macOS to a specific size, like 1920x1080. I initially started looking at one of the many macOS window managers, but after reading lots of reviews and descriptions and still being unclear if any of these products did what I wanted, I decided to step back to using AppleScript to accomplish what I was seeking. In this post, I’ll share the solution (and the articles that helped me arrive at the solution).


Technology Short Take 133

Welcome to Technology Short Take #133! This time around, I have a collection of links featuring the new Raspberry Pi 400, some macOS security-related articles, information on AWS Nitro Enclaves and gVisor, and a few other topics. Enjoy!


Technology Short Take 132

Welcome to Technology Short Take #132! My list of links and articles from around the web seems to be a bit heavy on security-related topics this time. Still, there’s a decent collection of networking, cloud computing, and virtualization articles as well as a smattering of other topics for you to peruse. I hope you find something useful!


Considerations for using IaC with Cluster API

In other posts on this site, I’ve talked about both infrastructure-as-code (see my posts on Terraform or my posts on Pulumi) and somewhat separately I’ve talked about Cluster API (see my posts on Cluster API). And while I’ve discussed the idea of using existing AWS infrastructure with Cluster API, in this post I wanted to try to think about how these two technologies play together, and provide some considerations for using them together.


Technology Short Take 131

Welcome to Technology Short Take #131! I’m back with another collection of articles on various data center technologies. This time around the content is a tad heavy on the security side, but I’ve still managed to pull in articles on networking, cloud computing, applications, and some programming-related content. Here’s hoping you find something useful here!


Updating AWS Credentials in Cluster API

I’ve written a bit here and there about Cluster API (aka CAPI), mostly focusing on the Cluster API Provider for AWS (CAPA). If you’re not yet familiar with CAPI, have a look at my CAPI introduction or check the Introduction section of the CAPI site. Because CAPI interacts directly with infrastructure providers, it typically has to have some way of authenticating to those infrastructure providers. The AWS provider for Cluster API is no exception. In this post, I’ll show how to update the AWS credentials used by CAPA.


Behavior Changes in clusterawsadm 0.5.5

Late last week I needed to test some Kubernetes functionality, so I thought I’d spin up a test cluster really quick using Cluster API (CAPI). As often happens with fast-moving projects like Kubernetes and CAPI, my existing CAPI environment had gotten a little out of date. So I updated my environment, and along the way picked up an important change in the default behavior of the clusterawsadm tool used by the Cluster API Provider for AWS (CAPA). In this post, I’ll share more information on this change in default behavior and the impacts of that change.


Older Posts

Find more posts by browsing the post categories, content tags, or site archives pages. Thanks for visiting!