Scott's Weblog The weblog of an IT pro focusing on cloud computing, Kubernetes, Linux, containers, and networking

Saying Goodbye to the Full Stack Journey

In January 2016, I published the first-ever episode of the Full Stack Journey podcast. In October 2023, the last-ever episode of the Full Stack Journey podcast was published. After almost seven years and 83 episodes, it was time to end my quirky, eclectic, and unusual podcast that explored career journeys alongside various technologies, products, and open source projects. In this post, I wanted to share a few thoughts about saying goodbye to the Full Stack Journey.

First and foremost, let me say that I really enjoyed being the host of the Full Stack Journey podcast—far more than I expected I would, if I’m honest. While I didn’t love the logistics of producing a podcast, I did love getting to talk with folks, hear their stories, and learn about new things. So, while part of me is thankful to have a little less work to do, another part—a larger part—is sad to see it end.

That being said, some of you are probably wondering why it ended. I mentioned that I didn’t enjoy the logistics of producing a podcast; specifically, I didn’t enjoy audio editing. Some folks like it, but I didn’t. It was truly a chore for me. That was why I joined the Packet Pushers podcast network—they offered to provide that logistical support in exchange for the opportunity to try to monetize the podcast. (And if I’m honest, I have mad respect for both Greg and Ethan. You might say I’m a bit of a fanboy. Any chance to be associated with them in some form was not to be missed.) The truth, though, is the podcast just didn’t resonate with sponsors. I get it; I do. It was a bit of an odd podcast. Not quite career-focused, not quite technology-focused, but something unusual that fell somewhere in between. When all was said and done, it wasn’t financially feasible for Packet Pushers to continue to support the podcast. I bear no ill will toward Greg, Ethan, or any of the rest of the Packet Pushers crew; they were, are, and will remain top notch in my book. If you ever get the opportunity to meet them, work with them, or be a guest on one of their podcasts, take the opportunity. You won’t regret it.

What’s next? I honestly don’t know. I’ve toyed with the idea of launching another podcast, and I’ve toyed with the idea of joining a podcast as a co-host. I’ve contemplated creating some video content, since that seems to be rather popular. I don’t know. If you have a suggestion for me to consider, feel free to hit me up; I’m around on a variety of social media channels and other community spots.

Regardless of what I decide to do, I will always look fondly on my time hosting Full Stack Journey. To those of you who listened to at least one of the shows, thank you; to those of you who were guests, I couldn’t have done it without you—you’re the real stars of the show. Hosting Full Stack Journey was definitely a journey itself, and it’s one that I’m thankful I took.

Guest Post: Moving Secrets Where They Belong

by Simen A.W. Olsen

Pulumi recently shipped Pulumi ESC, which adds the “Environment” tab to Pulumi Cloud. For us at Bjerk, this means we can move secrets into a secrets manager like Google Secrets Manager. Let me show you how we did it!

We are already rotating secrets with our own CLI tool, which works fine: we get notifications in our Slack channel—which everyone tends to ignore until something actually breaks. If you are curious how we handle this today, we use our own NPM package that throws an exception if a secret has expired. To make sure everything keeps working smoothly, a GitHub Actions workflow scheduled to run daily checks for drift.
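The drift-checking workflow is conceptually simple; here is a sketch of what such a scheduled GitHub Actions workflow can look like (the script name, schedule, and action versions here are illustrative assumptions, not our actual setup):

```yaml
name: secret-drift-check
on:
  schedule:
    - cron: "0 6 * * *"   # run once daily
jobs:
  check:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
      - run: npm ci
      # Fails the job if any secret has expired, which in turn
      # triggers the Slack notification
      - run: npm run check-secrets
```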

The secrets are shared between stacks using StackReferences, which has served us well.

Improving security

One issue with our current setup is that we publicly store encrypted secrets in our repository. Previously, we’d considered using Google Secrets Manager with the GetSecret function, but that approach brings its own baggage, such as granting and managing permissions to each secret—not to mention that we already use multiple secret managers/vaults.

Now, with Pulumi ESC, it’s time to pick this up again. Pulumi ESC enables us to compose environments, which means we can throw away the stack references. With Pulumi ESC, we can also expose configuration and secrets as environment variables by running esc run <environment> -- <command>; more on that below.

Let’s dig into how we did this! (Note I’m using pulumi env here, as support for Pulumi ESC is built into the pulumi CLI. There’s also a separate CLI tool, esc, that can be used with Pulumi ESC.)

pulumi env init bjerk/common

This command creates an empty environment. You can easily list all the environments you have in your account as such:

$ pulumi env ls

bjerk/common

Next, we wanted to wire up Google Secrets Manager. The nice thing is that we’re using Workload Identity federation—which I’m not going to go into right now—and that means we’ve given Pulumi Cloud access to create a short-lived access token to a service account.

Our basic example looks like this:

values:
  gcp:
    login:
      fn::open::gcp-login:
        project: 123456789
        oidc:
          workloadPoolId: esc321
          providerId: esc123
          serviceAccount: esc@on-bjerk.iam.gserviceaccount.com
    secrets:
      fn::open::gcp-secrets:
        login: ${gcp.login}
        access:
          bot-github-token:
            name: bot-github-token
  pulumiConfig:
    github:token: ${gcp.bot-github-token}

We are wiring up gcp with a service account, provider ID, and workload pool ID. This service account has access to our secrets, such as the one I’m referencing here, bot-github-token. We reference this secret in pulumiConfig, which makes it available to Pulumi stacks referencing this environment during pulumi up/preview/refresh/destroy.

To evaluate if our setup works, we can run:

$ pulumi env open bjerk/common

{
  "pulumiConfig": {
    "github": "not-a-secret"
  }
}

Jumping back to a Pulumi project, we can easily import this environment into our project by adding the following to our Pulumi stack configuration file (Pulumi.<stack-name>.yaml):

environment:
  imports:
    - bjerk/common

Exposing environment variables

Using our example from above, we can also add the environmentVariables key to allow developers on our team to use secrets defined in environments.

values:
  # ...
  environmentVariables:
    GITHUB_TOKEN: ${gcp.bot-github-token}

A particularly nice feature is that we can access these environments with the env run command.

$ pulumi env run bjerk/common -- printenv
...
GITHUB_TOKEN=a-secret

What’s next?

We are eager to see how Pulumi ESC will evolve. It has already improved a lot of our secrets handling!

One thing I hope to see is a password manager as a provider. At Bjerk, we’ve learned that storing some secrets in a typical password manager is practical, such as the API keys that we cannot programmatically renew.

I imagine being able to hook up Bitwarden or 1Password in the future with something like this:

values:
  gcp:
    # ...

  bitwarden:
    login:
      fn::open::not-supported:bitwarden:
        url: https://vaulty.no
        access-token: not-a-secret
    secrets:
      fn::open::not-supported:bitwarden-secrets:
        login: ${bitwarden.login}
        access:
          pat-github-token: simenandre-github-token

  environmentVariables:
    GITHUB_TOKEN: ${bitwarden.pat-github-token}

  pulumiConfig:
    github:token: ${gcp.bot-github-token}

As I mentioned above, we can compose multiple providers in one environment; the example above shows an environment hooked up to both GCP and Bitwarden.

To summarize, Pulumi ESC lets us move secrets to where they belong, enhancing our workflow’s security without complicating things for our team.


This article was written by Simen A. W. Olsen. Simen is the manager at Bjerk (a digital product developer agency), a member of the Puluminaries community champion program, and a long-time contributor to Pulumi. He is driven by creating an impact where people feel safe, fulfilled, and empowered to be their best.

Assigning Tags by Default on AWS with Pulumi

Appropriately tagging resources on AWS is an important part of effectively managing infrastructure resources for many organizations. As such, an infrastructure as code (IaC) solution for AWS must have the ability to ensure that resources are always created with the appropriate tags. (Note that this is subtly different from a policy mechanism that prevents resources from being created without the appropriate tags.) In this post, I’ll show you a couple of ways to assign tags by default when creating AWS resources with Pulumi. Code examples are provided in Golang.

There are at least two ways (perhaps more) of handling this with Pulumi:

  1. Adding the default tags to the stack configuration
  2. Adding the default tags to an explicit provider

Each approach has its advantages and disadvantages, so there isn’t—in my opinion, at least—a definitive “best way” of doing this. The best way for you will depend on your specific circumstances.

In both cases, the solution involves modifying the configuration of the resource provider Pulumi uses to provision AWS resources. Pulumi supports the notion of both default providers and explicit providers. The former are created automatically and are configured via the stack configuration. (In fact, using stack configuration is currently the only way to modify a default provider—see this GitHub issue for more details.) The latter are defined within your Pulumi program, and therefore can be programmatically configured.

With that information in mind, what this really comes down to is:

  1. Configuring the default provider via stack configuration to supply default tags
  2. Creating an explicit provider in your Pulumi program that has default tags configured

Let’s take a look at each approach.

Using the Stack Configuration

Using the stack configuration to supply default tags is easy and is language-independent; you just modify your Pulumi.<stack>.yaml to include the following (substituting your own keys and values, of course):

config:
  aws:defaultTags:
    tags:
      Environment: "testing"
      Owner: "slowe"
      Team: "DevRel"

When you run pulumi up to instantiate your resources, the default AWS provider will automatically add the tags specified in your configuration to taggable resources. That’s it.

The advantage of this approach is that it is easy. The disadvantage is that these values are “static”: they are set using pulumi config set before you run pulumi up, so you can’t derive them dynamically. If you wanted to include the stack name in the set of default tags, for example, you’d have to set it manually in the configuration of each stack—and the same goes for the project name, the Pulumi Cloud organization name, or any other information you’d like to include dynamically.
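If it helps to see the commands, the YAML shown earlier can be produced from the command line with pulumi config set, using the --path flag to address the nested keys (a sketch using the same example keys and values):

```shell
# Write nested default-tag values into Pulumi.<stack>.yaml
pulumi config set --path 'aws:defaultTags.tags.Environment' testing
pulumi config set --path 'aws:defaultTags.tags.Owner' slowe
pulumi config set --path 'aws:defaultTags.tags.Team' DevRel
```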

If you want the flexibility to dynamically include information, then you’re better off using an explicit provider.

Using an Explicit Provider

As I mentioned earlier, an explicit provider is one you create programmatically in your Pulumi program, like this (note that I’ve intentionally omitted some code for brevity):

explicitAwsProvider, err := aws.NewProvider(ctx, "explicit-aws-provider", ...)

Specifically, to create an explicit provider for AWS and specify default tags, it would look something like this in Go:

awsProviderTagged, err := aws.NewProvider(ctx, "aws-provider-tagged", &aws.ProviderArgs{
	Region: pulumi.String("us-west-2"),
	DefaultTags: &aws.ProviderDefaultTagsArgs{
		Tags: pulumi.StringMap{
			"Project":      pulumi.String(ctx.Project()),
			"Stack":        pulumi.String(ctx.Stack()),
			"Organization": pulumi.String(ctx.Organization()),
		},
	},
})
if err != nil {
	return err
}

The code above, in addition to providing a rough idea of the syntax to programmatically create an explicit AWS provider with default tags configured, also illustrates how to include dynamic values in the provider configuration. Here you can see using ctx.Project(), ctx.Stack(), and ctx.Organization() to programmatically use the project name, stack name, and organization name (Pulumi Cloud only) as default tags in the explicit AWS provider. This is a key advantage over the configuration-based approach, which doesn’t provide a mechanism for using dynamic values.

The drawback to using an explicit provider is that you then need to tell Pulumi to use this explicit provider when creating a resource. This snippet of Go code shows what that looks like:

// Create an S3 bucket; this one will not have default tags
untaggedBucket, err := s3.NewBucket(ctx, "untagged-bucket", nil)
if err != nil {
	return err
}

// Create another S3 bucket; this one will have default tags
taggedBucket, err := s3.NewBucket(ctx, "tagged-bucket", nil, pulumi.Provider(awsProviderTagged))
if err != nil {
	return err
}

The first resource doesn’t specify a provider, and will therefore use the default provider. Assuming you have not specified aws:defaultTags in the stack configuration, this bucket will end up having no tags (you can verify this by using aws s3api get-bucket-tagging, as described in the AWS CLI docs). The second resource does specify the explicit provider created earlier, and therefore will have those tags defined on the resource (again, you can verify using the AWS CLI).

Which approach is best? That’s really determined by your requirements. Further, it’s not really an “either-or” situation; you may want to use the configuration-based approach for the default provider and also use an explicit provider alongside it (perhaps you need to create resources in two different AWS regions in the same Pulumi program). Each approach has some advantages and disadvantages:

  • Specifying aws:defaultTags in the stack configuration to configure the default provider is easy and language-independent. However, it lacks the ability to dynamically determine values to use.
  • Using an explicit provider provides greater flexibility, but it does create the added work of having to explicitly specify the provider (it’s called an explicit provider for a reason) when creating resources.

If you’re interested in using this functionality yourself, note that I tested the approaches described above on my macOS 13.5.2 system running Pulumi 3.76.1 and Go 1.21.0 with version 6.0.4 of the AWS provider. However, as far as I am aware, nothing that I’ve described in this post is specific to any of these versions.

I hope that you find this information helpful. Feel free to join the Pulumi Community Slack if you have additional questions about Pulumi; alternatively, I am happy to do my best to help if you reach out to me directly (Twitter, Mastodon, Bluesky). Thanks for reading!

Technology Short Take 170

Welcome to Technology Short Take #170! I had originally intended to get this published before the long Labor Day weekend, but didn’t quite have it ready. So, here you go—here’s your latest collection of links from around the internet focused on data center and cloud-related technologies. I hope that you find something useful here.

Networking

Servers/Hardware

  • I must admit that I always wanted to have a Sun workstation, and I’ve had an interest in SunOS/Solaris for years (check out this link if you don’t believe me). So, it’s natural that this post on reliving the glory days of Sun workstations would catch my attention.

Security

  • Michael Tsai weighs in on the Microsoft signing key that was stolen, sharing several links with commentary on this matter.
  • Exploiting cloud VMs via a remote serial/console service? Yikes. Fortunately, this Microsoft Security Response Center article not only shows how to use Azure Serial Console to compromise sensitive information, but also shows how to detect exploitation activity. What about preventing it?
  • As detailed in this article, it turns out BitLocker can be bypassed—assuming physical access to the hardware—with a cheap logic analyzer.
  • Daniel Stenberg rails about everything that is wrong about CVEs.
  • Grafana recently had to rotate their GPG signing key.
  • Time to update your iOS, iPadOS, and macOS devices! A new zero-click, zero-day exploit was announced and Apple has released an update for all affected systems. More details on the exploit are available here.

Cloud Computing/Cloud Management

Operating Systems/Applications

  • I started being a fan of Basic Apple Guy a while ago, and I use some of his wallpapers on my Mac and my mobile devices. (I’m weird/obsessive like this, but I like using matching wallpapers across all my devices.) Anyway, he released a couple of wallpapers a while ago that I’m just now getting around to sharing here: a revamped version of his revamped macOS Tiger, and a “parody” wallpaper for OS X Rancho Cucamonga (there’s a story there). What’s that—which wallpaper of his am I currently using? I’m currently using the Twilight variation of macOS Tiger Redux.
  • Howard Oakley talks about App Translocation (formerly known as Gatekeeper Path Randomization) in macOS. While I generally enjoy using macOS, sometimes the tight control that Cupertino exercises over the OS and its behaviors feels…constricting.
  • Jeff Geerling walks readers through testing the Coral TPU accelerator using Docker (in order to work around some Python library dependency issues).

Programming/Development

  • For what is perhaps an alternative viewpoint on the role of AI coding assistants, check out Rizèl Scarlett’s post on learning p5.js with GitHub Copilot. (Disclaimer: It’s my understanding that Rizèl works for GitHub as a developer advocate, so keep that in mind when reading the post.)
  • Troy Hunt provides some detail on how he fights API bots using Turnstile from Cloudflare. It’s a pretty interesting read; this is a Cloudflare feature I wasn’t really aware of.

Virtualization

That’s all for now! I’m always open to reader feedback, so if you have feedback for me, feel free to contact me. My e-mail address is not terribly hard to find, and you can also use Twitter, Mastodon, or Bluesky to contact me. I also tend to lurk in a number of Slack communities, so you’re welcome to contact me there as well. I’d love to hear from you!

Mac, iPad, or Both?

Jason Snell and John Gruber, both stalwarts in the Apple journalism world, have recently weighed in on this topic. Jason says he’s given up on the iPad-only travel dream; John says he keeps throwing his iPad in his bag when he travels, even if he never uses it. I have thoughts on this topic—as you might think, considering I decided to write about it! (Ah, but what device did I use to write?)

Jason kicks off the discussion with a review of his iPad travel usage, which, until the arrival of Apple Silicon, was going along swimmingly. Now, with Apple Silicon-powered Macs, things are different:

In the battle between iPad and Mac, I’m a longtime member of Team Both—I use my Mac most of the day at my desk, but when I write elsewhere in the house or backyard, I switch to an iPad Pro in the Magic Keyboard case. And that iPad (in a regular case) is my primary computing device when I’m not in work mode…But here I sit at my mother’s dining room table, typing on a MacBook Air. Something has changed in my approach to travel, and I’m trying to understand just what it is and what it tells me about the trajectory of the iPad as a productivity tool.

John places himself squarely on “Team Mac”, but admits to wanting to use his iPad more:

But for me personally, I continue to find that I’m most productive when I spend my working time in front of my Mac…The reason this topic remains evergreen is that I want to use my iPad more. There’s something ineffable about it. It’s a thrill when I use my iPad to do something that an iPad is actually best at. I honestly think I’d be more productive if I owned no iPad at all, yet I keep trying to find ways to use it more.

Something else John says really resonates with me, though:

But I know I’m best off, productivity-wise, using my iPad basically as a single-tasking consumption device for long-form reading and video watching.

In long-past articles (see here and here), I describe how I classify many of the applications I use into different “use case” categories:

  • Consumption: These are the applications I use to gather (“consume”) information. These would be things like NetNewsWire, web browsers, chat/messaging apps, and the like.
  • Organization: These are the apps for organizing information. Mostly this category revolves around organizing tasks/items/commitments.
  • Creation: As the name suggests, these are the apps for creating content.

Now, why am I telling you this? I find I am aligned with John—I find myself most productive when I use my iPad in the “consumption” category. It works well for allowing me to do long-form reading or watching videos. My iPad is semi-useful in the “organization” category, where my task management tooling works across both iPadOS and macOS (and iOS, but that’s not part of this discussion). I don’t, generally, find using the iPad helpful with content creation tasks; for me, that’s where the Mac shines.

I would say, then, that I identify with both Jason Snell and John Gruber:

  1. Like Jason, I do call myself a member of “Team Both,” although perhaps in a different way. Jason is—or wants to be—a member of “Team Both” for all tasks but in different contexts. I am a member of “Team Both” for different tasks in all contexts.
  2. Like John, I find myself most productive using the iPad as a consumption-focused device.

This should clue you in on what device I used to write this. (I used my Mac.)

What about you? If you have both a Mac and an iPad, how do you decide which device to use when? Hit me on Twitter or on Mastodon and let me know your thoughts!

Recent Posts

Technology Short Take 169

Welcome to Technology Short Take #169! Prior to the recent Spousetivities post, it had been a few months since I posted on the site; life has been busy, and it hasn’t left much time for blogging. Hopefully things will settle down soon, but until then I’ll continue to do the best I can to share useful information with folks. Hopefully something I’ve included in this Technology Short Take proves to be useful to someone. OK, let’s get on to the content!

Read more...

Spousetivities Returns to VMware Explore 2023

After a lengthy hiatus—prompted by a pandemic and the suspension of in-person events as a result—Spousetivities returns to VMware Explore! VMware Explore, the event formerly known as VMworld, is happening in Las Vegas, NV, and Spousetivities will be there offering organized activities for spouses, partners, significant others, family, or friends traveling with conference attendees. Registration is already open!

Read more...

Technology Short Take 168

Welcome to Technology Short Take #168! Although this weekend is (in the US, at least) celebrated as Mother’s Day weekend—don’t forget to call or visit your mom!—I thought you all might want some light weekend reading. I’m here to help, after all. To that end, here’s the latest Technology Short Take, with links to a variety of articles in various disciplines. Enjoy!

Read more...

Technology Short Take 167

Welcome to Technology Short Take #167! This Technology Short Take is a tad shorter than the typical one; I’ve been busy recently and my intake volume of content has gone down, thus resulting in fewer links to share with all of you! I opted to go ahead and publish a shorter Technology Short Take instead of making everyone wait around for a longer one. In any case, here’s hoping that I’ve included something useful for you!

Read more...

Creating a Talos Linux Cluster on Azure with Pulumi

A little over a month ago I published a post on creating a Talos Linux cluster on AWS with Pulumi. Talos Linux is a re-thinking of your typical Linux distribution, custom-built for running Kubernetes. Talos Linux has no SSH access, no shell, and no console; instead, everything is managed via a gRPC API. This post is something of a “companion post” to the earlier AWS post; in this post, I’ll show you how to create a Talos Linux cluster on Azure with Pulumi.

Read more...

Technology Short Take 166

Welcome to Technology Short Take #166! I’ve been collecting links for the last few weeks, and now it’s time to share them with all of you. There are some familiar names in the links below, but also some newcomers—and I’m really excited to see that! I’m constantly on the lookout for new sources (if you have a site you think I should check out, hit me up—my contact info is at the bottom of this post!). But enough of that, let’s get on with the content. Enjoy!

Read more...

Creating a Talos Linux Cluster on AWS with Pulumi

Talos Linux is a Linux distribution purpose-built for running Kubernetes. The Talos web site describes Talos Linux as “secure, immutable, and minimal.” All system management is done via an API; there is no SSH access, no shell, and no console. In this post, I’ll share how to use Pulumi to automate the creation of a Talos Linux cluster on AWS.

Read more...

Technology Short Take 165

Welcome to Technology Short Take #165! Over the last few weeks, I’ve been collecting articles I wanted to share with readers on major areas in technology: networking, security, storage, virtualization, cloud computing, and OSes/applications. This particular Technology Short Take is a tad heavy on cloud computing, but there’s a decent mix of other articles as well. Enjoy!

Read more...

Stage Manager is Incomplete

I’ve been using macOS Stage Manager off and on for a little while now. In Stage Manager, I can see the beginnings of what might be a very useful paradigm for desktop computing. Unfortunately, in its current incarnation, I believe Stage Manager is incomplete.

Read more...

Installing the Prerelease Pulumi Provider for Talos

Normally, installing a Pulumi provider is pretty easy; you run pulumi up and the provider gets installed automatically. Worst case scenario, you can install the provider using pulumi plugin install. However, sometimes things have to be done manually. In this post, I’ll walk you through installing a prerelease provider—in this case, the prerelease Pulumi provider for Talos Linux—using both pulumi plugin install as well as using a manual process.

Read more...

Automating Docker Contexts with Pulumi

Since I switched my primary workstation to an M1-based MacBook Pro (see my review here), I’ve started using temporary AWS EC2 instances for compiling code, building Docker images, etc., instead of using laptop-local VMs. I had an older Mac Pro (running Fedora) here in my home office that formerly filled that role, but I’ve since given that to my son (he’s a young developer who wanted a development workstation). Besides, using EC2 instances has the benefit of access when I’m away from my home office. I use Pulumi to manage these instances, and I extended my Pulumi code to also include managing local Docker contexts for me as well. In this post, I’ll share the solution I’m using.

Read more...

Technology Short Take 164

Welcome to Technology Short Take #164! I’ve got another collection of links to articles on networking, security, cloud, programming, and career development—hopefully you find something useful!

Read more...

Technology Short Take 163

Welcome to Technology Short Take #163, the first of 2023! If you’re new to this site, the Technology Short Takes are essentially “link lists”—I collect links and articles about various technologies and I share them about every 3-4 weeks (sometimes more frequently). I’ll often add a bit of commentary here and there, but the real focus is the information in the linked articles. But enough of this, let’s get on with it! Here’s hoping you find something useful here.

Read more...

A Depth Year in 2023

Off and on for a number of years, I published a “projects for the coming year” post and a “report card for last year’s projects” post (you can find links to all of these here). Typically, the project list was composed of new things I would learn and/or new things I would create or do. While there’s nothing wrong with this sort of thing—not at all!—I came across an idea while reading that I’ve decided I’ll adopt for 2023: a depth year.

Read more...

Technology Short Take 162

Welcome to Technology Short Take #162! It’s taken me a bit longer than I would have liked to get this post assembled, but it’s finally here. Hopefully I’ve managed to find something you’ll find useful! As usual, the links below are organized by technology area/discipline, and I’ve added a little bit of commentary to some of the links where it felt necessary. Enjoy!

Read more...

Older Posts

Find more posts by browsing the post categories, content tags, or site archives pages. Thanks for visiting!