Scott's Weblog: The weblog of an IT pro specializing in cloud computing, virtualization, and networking, all with an open source view

Quick Reference to Common AWS CLI Commands

This post provides an extremely basic “quick reference” to some commonly-used AWS CLI commands. It’s not intended to be a deep dive, nor is it intended to serve as any sort of comprehensive reference (the AWS CLI docs nicely fill that need).

This post does make a couple of important assumptions:

  1. This post assumes you already have a basic understanding of the key AWS concepts and terminology, and therefore doesn’t provide any definitions or explanations of these concepts.

  2. This post assumes the AWS CLI is configured to output in JSON. (If you’re not familiar with JSON, see this introductory article.) If you’ve configured your AWS CLI installation to output in plain text, then you’ll need to adjust these commands accordingly.

I’ll update this post over time to add more “commonly-used” commands, since each reader’s definition of “commonly used” may be different based on the AWS services consumed.

To list SSH keypairs in your default region:

aws ec2 describe-key-pairs

To use jq to grab the name of the first SSH keypair returned:

aws ec2 describe-key-pairs | jq -r '.KeyPairs[0].KeyName'

To store the name of the first SSH keypair returned in a variable for use in later commands:

KEY_NAME=$(aws ec2 describe-key-pairs | jq -r '.KeyPairs[0].KeyName')

More information on the use of jq can be found in this article or via the jq homepage, so this post doesn’t go into any detail on the use of jq in conjunction with the AWS CLI. Additionally, for all remaining command examples, I’ll leave the assignment of output values into a variable as an exercise for the reader.

To grab a list of instance types in your region, I recommend referring to Rodney “Rodos” Haywood’s post on determining which instances are available in your region.

To list security groups in your default region:

aws ec2 describe-security-groups

To retrieve security groups from a specific region (us-west-1, for example):

aws ec2 describe-security-groups --region us-west-1

To use the jp tool to grab the security group ID of the group named “internal-only”:

aws ec2 describe-security-groups | jp "SecurityGroups[?GroupName == 'internal-only'].GroupId"

The jp command was created by the same folks who work on the AWS CLI; see this site for more information.

To list the subnets in your default region:

aws ec2 describe-subnets

To grab the subnet ID for a subnet in a particular Availability Zone (AZ) in a region using jp:

aws ec2 describe-subnets | jp "Subnets[?AvailabilityZone == 'us-west-2b'].SubnetId"

To describe the Amazon Machine Images (AMIs) you could use to launch an instance:

aws ec2 describe-images

This command alone isn’t all that helpful; it returns too much information. Filtering the information is pretty much required in order to make it useful.

To filter the list of images by owner:

aws ec2 describe-images --owners 099720109477

To use server-side filters to further restrict the information returned by the aws ec2 describe-images command (this example finds Ubuntu 14.04 “Trusty Tahr” AMIs in your default region):

aws ec2 describe-images --owners 099720109477 --filters Name=root-device-type,Values=ebs Name=architecture,Values=x86_64 Name=name,Values='*ubuntu-trusty-14.04*' Name=virtualization-type,Values=hvm

To combine server-side filters and a JMESPath query to further refine the information returned by the aws ec2 describe-images command (this example returns the latest Ubuntu 14.04 “Trusty Tahr” AMI in your default region):

aws ec2 describe-images --owners 099720109477 --filters Name=root-device-type,Values=ebs Name=architecture,Values=x86_64 Name=name,Values='*ubuntu-trusty-14.04*' Name=virtualization-type,Values=hvm --query 'sort_by(Images, &Name)[-1].ImageId'

Naturally, you can manipulate the filter values to find other types of AMIs. To find the latest CentOS Atomic Host AMI in your default region:

aws ec2 describe-images --owners 410186602215 --filters Name=name,Values="*CentOS Atomic*" --query 'sort_by(Images,&CreationDate)[-1].ImageId'

To find the latest CoreOS Container Linux AMI from the Stable channel (in your default region):

aws ec2 describe-images --filters Name=name,Values="*CoreOS-stable*" Name=virtualization-type,Values=hvm --query 'sort_by(Images,&CreationDate)[-1].ImageId'

Further variations on these commands for other AMIs are left as an exercise for the reader.

To launch an instance in your default region (assumes you’ve populated the necessary variables using other AWS CLI commands):

aws ec2 run-instances --image-id $IMAGE_ID --count 1 --instance-type t2.micro --key-name $KEY_NAME --security-group-ids $SEC_GRP_ID --subnet-id $SUBNET_ID
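For example, the variables above might be populated by reusing the queries shown earlier in this post (a rough sketch; adding --output text to the AMI query strips the JSON quoting from the result):

IMAGE_ID=$(aws ec2 describe-images --owners 099720109477 \
  --filters Name=root-device-type,Values=ebs Name=architecture,Values=x86_64 \
  Name=name,Values='*ubuntu-trusty-14.04*' Name=virtualization-type,Values=hvm \
  --query 'sort_by(Images, &Name)[-1].ImageId' --output text)
KEY_NAME=$(aws ec2 describe-key-pairs | jq -r '.KeyPairs[0].KeyName')
SEC_GRP_ID=$(aws ec2 describe-security-groups | \
  jq -r '.SecurityGroups[] | select(.GroupName == "internal-only") | .GroupId')
SUBNET_ID=$(aws ec2 describe-subnets | \
  jq -r '.Subnets[] | select(.AvailabilityZone == "us-west-2b") | .SubnetId')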

To list the instances in your default region:

aws ec2 describe-instances

To retrieve information about instances in your default region and use jq to return only the Instance ID and public IP address:

aws ec2 describe-instances | jq '.Reservations[].Instances[] | {instance: .InstanceId, publicip: .PublicIpAddress}'

To terminate one or more instances:

aws ec2 terminate-instances --instance-ids $INSTANCE_IDS
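For example, you might first gather the IDs of the instances you want to remove (the tag name and value below are hypothetical) and then terminate them:

INSTANCE_IDS=$(aws ec2 describe-instances \
  --filters Name=tag:Name,Values=test Name=instance-state-name,Values=running | \
  jq -r '.Reservations[].Instances[].InstanceId' | tr '\n' ' ')
aws ec2 terminate-instances --instance-ids $INSTANCE_IDS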

To remove a rule from a security group:

aws ec2 revoke-security-group-ingress --group-id $SEC_GROUP_ID --protocol <tcp|udp|icmp> --port <value> --cidr <value>

To add a rule to a security group:

aws ec2 authorize-security-group-ingress --group-id $SEC_GROUP_ID --protocol <tcp|udp|icmp> --port <value> --cidr <value>
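For example, to allow inbound SSH from a (hypothetical) CIDR block:

aws ec2 authorize-security-group-ingress --group-id $SEC_GROUP_ID --protocol tcp --port 22 --cidr 203.0.113.0/24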

To create an Elastic Container Service (ECS) cluster:

aws ecs create-cluster [--cluster-name <value>]

If you omit --cluster-name, the cluster is created with the name “default”. If you use a name other than “default”, you’ll need to be sure to add the --cluster <value> parameter to all other ECS commands. The examples below assume a name other than “default”.
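For example, to create a cluster with the (hypothetical) name “demo”:

aws ecs create-cluster --cluster-name demo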

To delete an ECS cluster:

aws ecs delete-cluster --cluster <value>

To add container instances (instances running ECS Agent) to a cluster:

aws ec2 run-instances --image-id $IMAGE_ID --count 3 --instance-type t2.medium --key-name $KEY_NAME --subnet-id $SUBNET_ID --security-group-ids $SEC_GROUP_ID --user-data file://user-data --iam-instance-profile ecsInstanceRole

The example above assumes you’ve already created the necessary IAM instance profile, and that the file user-data contains the necessary instructions to prep the instance to join the ECS cluster. Refer to the Amazon ECS documentation for more details.
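As a point of reference, if you’re using the ECS-optimized AMI, a minimal user-data file may need to contain nothing more than the cluster name (the name below is a placeholder):

#!/bin/bash
echo ECS_CLUSTER=demo >> /etc/ecs/ecs.config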

To register a task definition (assumes the JSON filename referenced contains a valid ECS task definition):

aws ecs register-task-definition --cli-input-json file://<filename>.json

To create a service:

aws ecs create-service --cluster <value> --service-name <value> --task-definition <family:task:revision> --desired-count 2

To scale down ECS services:

aws ecs update-service --cluster <value> --service <value> --desired-count 0

To delete a service:

aws ecs delete-service --cluster <value> --service <value>

To deregister a container instance (an instance running the ECS Agent); note that the value expected here is the ECS container instance ID or ARN, not the EC2 instance ID:

aws ecs deregister-container-instance --cluster <value> --container-instance $CONTAINER_INSTANCE_ID --force
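If you need to look up the container instance IDs or ARNs for a cluster, you can list them first:

aws ecs list-container-instances --cluster <value>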

If there are additional AWS CLI commands you think I should add here, feel free to hit me up on Twitter. Try to keep them as broadly applicable as possible, so that I can maximize the benefit to readers. Thanks!

Using ODrive for Cloud Storage on Linux

A few months ago, I stumbled across a service called ODrive (“Oh” Drive) that allows you to combine multiple cloud storage services together. Since that time, I’ve been experimenting with ODrive, testing it to see how well it works, if at all, with my Fedora Linux environment. In spite of very limited documentation, I think I’ve finally come to a point where I can share what I’ve learned.

Before I proceed any further, I do feel it is necessary to provide a couple of disclaimers. First, while I’m using ODrive myself, I’m not using their paid (premium) service, even though it offers quite a bit more functionality. Why? Maybe this is a “chicken-and-egg” scenario, but I have a really hard time paying for a premium service where Linux client functionality is very limited and the documentation is extraordinarily sparse. (ODrive, if you’re reading this: put some effort into your Linux support and your docs, and you’ll probably get more paying customers.) Second, I’m providing this information “as is”; use it at your own risk.

OK, with those disclaimers out of the way, let’s get into the content. For Linux users, this page is about the extent of ODrive’s documentation. While this information is sufficient to briefly/quickly test out the ODrive Sync Agent, it doesn’t provide any sort of documentation/recommendations for how to put ODrive to work. Based on the information on this one page and based on my own trial-and-error experience, here’s some additional information you may find helpful.

First, when downloading and unpacking the Sync Agent and ODrive CLI binaries, I’d recommend putting them in a directory in your home folder. Fedora, for example, already has ~/.local/bin in the PATH, so that might be a good location (it’s what I’m using). It doesn’t make any sense (in my opinion) to put them somewhere else, because the agent must run in the same user context as the ODrive CLI binary in order for it to work. (More on that point in a second.)

Next, you’re probably going to want to have the ODrive Sync Agent run automatically in the background. Their documentation doesn’t describe this at all, and ODrive was mysteriously silent when I tried to reach them on social media for clarification/recommendations. I ended up using a systemd unit to have the Sync Agent run in the background. However, this has to be a “user-mode” systemd unit running in the context of your user account. On Fedora 25 (which runs systemd 231), that means using systemctl --user to manage the unit, and storing the unit in ~/.local/share/systemd/user. (I’m not a systemd expert, so there may be other locations that are supported.)

Here’s a sample systemd unit you could use:

[Unit]
Description=odrive Sync Agent daemon

[Service]
ExecStart=/path/to/your/home/dir/.local/bin/odriveagent

[Install]
WantedBy=default.target

Obviously, you’ll want to customize the ExecStart location with the correct location of the binaries on your particular system. Save this systemd unit as odriveagent.service (or similar) in the appropriate location for user-mode units on your distribution (on Fedora 25, that’s ~/.local/share/systemd/user). Then, run systemctl --user daemon-reload to reload the systemd units, and then systemctl --user start odriveagent to actually start the Sync Agent. Of course, you’ll probably want to run systemctl --user enable odriveagent to have the ODrive Sync Agent unit start automatically when you log in.
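Putting those steps together on Fedora 25, the sequence looks roughly like this (assuming the unit file is named odriveagent.service and sits in your current directory):

mkdir -p ~/.local/share/systemd/user
cp odriveagent.service ~/.local/share/systemd/user/
systemctl --user daemon-reload
systemctl --user start odriveagent
systemctl --user enable odriveagent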

Once this user-mode systemd unit is running, run odrive status to verify that the ODrive CLI binary is able to properly communicate with the Sync Agent, and then authenticate the ODrive Sync Agent using the instructions ODrive provides.

Another area that was very sparse and unclear in the documentation was around the use of odrive mount. The Sync Agent page only makes brief passing reference to creating a local directory and then using odrive mount. What wasn’t explained was the relationship between the various storage services that are mapped into ODrive and the “remote mount point” referenced when discussing odrive mount.

In ODrive’s web interface, you’ll connect ODrive to various storage services—Google Drive, Dropbox, OneDrive, Box, etc. The web interface doesn’t make clear that these storage services are essentially mapped in as “subdirectories” under the root of your ODrive. By default, these “subdirectories” are named according to the storage provider. So, you map Google Drive into your ODrive, and it will create a “subdirectory” in the ODrive web interface named “Google Drive.” Map OneDrive into ODrive, and you’ll get a “subdirectory” in the web interface named “OneDrive”. After working with ODrive for a little while, I also found that you can rename the “subdirectory” assigned to each storage service mapped into ODrive. (This, in my opinion, is a feature that should be made more clear.)

When you use odrive mount, you’re creating the equivalent of a filesystem mount point, mounting a remote cloud storage provider onto a local directory. The example that ODrive provides on their web site says to use this command:

odrive mount /path/to/local/dir /

This command mounts the “root” of the ODrive to a local path. Each of the storage services comes in as a subdirectory according to the name assigned (either by default or by you when you renamed it) found in the ODrive web interface. So, if you have Google Drive mapped in to ODrive and have named it GDrive, then you’ll see a GDrive folder under the mount point that represents your Google Drive.

If you’re like me, you’ll probably start thinking about wanting to use multiple ODrive mount points, so that you can map storage services into your local filesystem in more flexible ways. For example, to create provider-specific mounts you could do something like this:

odrive mount ~/Local-GDrive /GDrive
odrive mount ~/AmazonDrive /AmznDrv

This is neat, except for the fact that this is only a Premium feature. (You have to have a paid subscription.) This is not clear from their documentation; it’s only by trying it and failing that you’ll discover this little nugget of information. After digging around for a while, I found a brief, unclear mention of this fact in their features comparison list. (By the way, I have no problem with ODrive charging for premium features—my complaint is the incredibly sparse documentation and lack of clarity.)

Thus, if you’re not paying for their service, you’re limited to a single mount point, and therefore the only use of an ODrive mount point that makes sense is to mount the root of the ODrive on your local filesystem.

So, you’ve got the binaries installed, the Sync Agent running via a user-mode systemd unit, the agent authenticated (per ODrive’s instructions), and an ODrive mount point configured. Now what? Well, since there’s no GUI whatsoever on Linux, it’s all CLI.

I’m a huge CLI fan (in case you hadn’t guessed), so normally a CLI-only solution wouldn’t be a problem. In fact, I’ve complained before about products that don’t offer a good CLI in addition to their GUI (see this post about switching to VirtualBox and this post about a CLI for Dropbox). In the case of ODrive on Linux, though, the CLI is hobbled. For example, there’s no support for recursion or wildcards in the Linux ODrive CLI.

The lack of support for recursion or wildcards/filename globbing is such an issue because the ODrive Sync Agent won’t, by default, automatically start syncing files to your local system from the cloud storage provider. In my case, I already had files in my OneDrive for Business (OD4B) account. ODrive will “see” the files and directories in OD4B, and will create placeholder files to represent them: a .cloud file for every file, and a .cloudf file for every directory. In order to actually sync these files and directories down to your system, you need to use the odrive command line against one of these placeholder files, like this:

odrive sync cloud-folder.cloudf

This will then sync this folder down to your local system, but not any files or subfolders in it. You’ll need to repeat this process for subfolders, or for individual files in the folder. Naturally, you can see why recursion (syncing an entire directory tree) or filename globbing (grabbing all files) would be quite useful. Lack of support for either of these features means that there is a fair amount of manual work needed when you’re adding a cloud storage service where there’s already content present.

The good news is that if you add a file or folder locally, then that file or folder is automatically synced up to the cloud storage service as expected. It’s only when content is added first on the cloud storage service that you have to manually use the odrive sync command to have it synchronize down to your Linux system.

There are some quasi-workarounds to this; you can, for example, write a small script that might make it easier to sync multiple files or folders from the command line. On Linux systems running GNOME (like my Fedora box), you can put scripts into ~/.local/share/nautilus/scripts and actually have them accessible via a right-click on a file or folder in the GUI file manager. (I have a prototype script like this written, but I still need to do some additional testing/debugging).
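A rough sketch of such a script might look like this (this isn’t anything ODrive documents; run it from within your ODrive mount point, and use it at your own risk):

#!/usr/bin/env bash
# Repeatedly sync ODrive placeholder files (.cloudf folders, .cloud files)
# until no placeholders remain under the current directory.
while [ -n "$(find . -name '*.cloud*' -print -quit)" ]; do
    find . -name '*.cloud*' -exec odrive sync {} \;
done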

Summary

ODrive has the potential to be a very useful tool, particularly if you need to access services like OD4B where there is no native Linux client from the provider. However, its usefulness/usability is severely hampered by a lack of thorough documentation, no GUI functionality, and a hobbled CLI. That being said, if you can live with the current limitations, it may still win a place on your Linux system. (It has on mine.)

Feel free to hit me up on Twitter if you have questions, comments, or corrections about this post. Thank you for reading!

Manually Installing Azure CLI on Fedora 25

For various reasons that we don’t need to get into just yet, I’ve started exploring Microsoft Azure. Given that I’m a command-line interface (CLI) fan, and given that I use Fedora as my primary laptop operating system, this led me to installing the Azure CLI on my Fedora 25 system—and that, in turn, led to this blog post.

Some Background

First, some background. Microsoft has instructions for installing Azure CLI on Linux, but there are two problems with these instructions:

  1. Official packages that can be installed via a package manager are only provided for Ubuntu/Debian. Clearly, this leaves Fedora/CentOS/RHEL users out in the cold.

  2. Users of other Linux distributions are advised to use curl to download a script and pipe that script directly into Bash. (“Danger, Will Robinson!”) Clearly, this is not a security best practice, although I am glad that they didn’t recommend the use of sudo in the mix.

Now, if you dig into #2 a bit, you’ll find that the InstallAzureCli script you’re advised to download via curl does nothing more than download a Python script named install.py, which in turn uses pip and virtualenv to install the Azure CLI.

This left me wondering—why not just advise users to use virtualenv and pip directly, instead of writing a shell script that calls a Python script that calls virtualenv and pip? I posted a message on the Azure Forums to that effect; I’ll update this post when I learn more about the rationale.

Since Microsoft’s install script uses virtualenv and pip, I figured I’d just do that myself manually.

Manually Installing the Azure CLI

With that background in mind, here are the steps I followed to install the Azure CLI.

First, on Fedora 25 (a consolidated command sequence follows the list):

  1. Make sure that the “gcc”, “libffi-devel”, “python-devel”, and “openssl-devel” packages are installed (use dnf to take care of this). On my primary system, these were already installed, so I used a clean Fedora 25 Cloud Base Vagrant image to test. Only these four packages are prerequisites.

  2. Install Pip using sudo dnf install python-pip.

  3. Once Pip is installed, install virtualenv with pip install virtualenv. (I did a sudo -H pip install virtualenv to make virtualenv available to all users on the system, but as far as I know that’s not required.)

  4. Create a new virtual environment with virtualenv azure-cli (feel free to use a different name).

  5. Activate the new virtual environment (typically accomplished by sourcing the <virtualenv>/bin/activate script).

  6. Install the Azure CLI with pip install azure-cli.
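For reference, the Fedora 25 steps above condense to the following sequence (the virtual environment name is arbitrary):

sudo dnf install gcc libffi-devel python-devel openssl-devel
sudo dnf install python-pip
sudo -H pip install virtualenv
virtualenv azure-cli
source azure-cli/bin/activate
pip install azure-cli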

On macOS, the process is very similar:

  1. If Pip isn’t already installed, install it with sudo easy_install pip. (I’d already installed Pip on my macOS systems, so I didn’t need this step.)

  2. Use Pip to install virtualenv (with pip install virtualenv).

  3. Create a new virtual environment (virtualenv azure-cli).

  4. Activate the new virtual environment (source the activate script).

  5. Install the Azure CLI with pip install azure-cli.

Note that I did have the Xcode command-line tools installed—in order to have git—so that may affect things. I tested this on both El Capitan (10.11.6) and Sierra (10.12.5), and the process was identical on both systems.

So there you have it: how to install the Azure CLI using virtualenv and pip on Fedora 25 and macOS. Stay tuned for more Azure-related posts later this year.

Technology Short Take #85

Welcome to Technology Short Take #85! This is my irregularly-published collection of links and articles from around the Internet related to the major data center technologies: networking, hardware, security, cloud computing, applications/OSes, storage, and virtualization. Plus, just for fun, I usually try to include a couple career-related links as well. Enjoy!

Networking

  • Want to install VMware NSX in your home lab, but concerned about resource utilization? Here’s a guide to deploying NSX in home labs with limited resources. (Although it’s not called out in the article, it’s important to note that this is not supported by VMware.)
  • Jeffrey Kusters has a write-up on how to integrate AWS and vRealize Network Insight 3.4. This allows you to visualize network activity in AWS using vRNI, just as you would with on-premises traffic (assuming you’ve configured vRNI appropriately).
  • Gareth Lewis walks through a vCNS 5.5.4 to VMware NSX 6.2.5 upgrade.
  • It’s a bit Ansible-skewed (as expected given it’s posted on the Ansible site), but networking pros looking for some perspectives on using Ansible for network automation might find this post useful.
  • PowerNSX in a container? Sure!
  • I know that I saw this article from Matt Oswalt before, but for some reason when I came across it again while putting this post together it just really resonated with me. Networking professionals, it’s time to realize your cheese moved a long time ago.

Servers/Hardware

Security

  • This post was a particularly interesting (to me, at least) examination of another aspect of IoT security; specifically, the ZigBee protocol.

Cloud Computing/Cloud Management

  • Soenke Ruempler has a good overview of why an AWS multi-account architecture is probably better once you start working in AWS at any real (beyond lab testing) scale. This is, in my opinion, a really good article that draws upon a number of different sources to make recommendations. Well worth reading.
  • This post by Ross Kukulinski describes shell autocompletion for kubectl. Handy!
  • I found this post on how Atlassian designed their Kubernetes infrastructure on AWS to be helpful, if for no other reason than as a real-life example of how an organization reconciles AWS design considerations with Kubernetes design considerations.

Operating Systems/Applications

  • Chances are you’ve heard of CRI, the Container Runtime Interface for Kubernetes (which is a way of standardizing how Kubernetes interacts with an underlying container runtime). CRI-O is the effort to implement CRI for OCI-compatible runtimes like runC. This post helps explain CRI-O in a bit more detail.
  • Looks like Microsoft’s Nano Server is headed in a new direction.
  • Ajeet Singh Raina has an overview of some new networking functionality available in the RC5 release of Docker 17.06. Raina specifically calls out MACVLAN driver support in Swarm mode (recall that I discussed MACVLAN here about 18 months ago) as an example of the new networking functionality, and proceeds to provide some examples on how to use MACVLAN networking with Docker on Google Cloud Platform. I found this last part interesting, because early testing on my part with MACVLAN on AWS didn’t work; I guess this is one of those differences between GCP and AWS.
  • Feeling geeky? Read this article on diving deep into Windows to figure out what was causing an erratic performance issue despite plenty of hardware resources.
  • If you’re interested in learning more about BPF/eBPF, check out these two posts (here and here) from Julia Evans. (I’m of the opinion that eBPF portends a major change in Linux networking, and given the prevalence of Linux in networking, portends a change in the networking industry as a whole.)
  • I came across this article thanks to The New Stack on Twitter (good account to follow for “cloud-native” sorts of things): how software development tends to creep back towards the monolith. It’s a good read, and a good reminder that tools alone can’t replace discipline and rigor when it comes to building microservices-based architectures.

Storage

  • J Metz offered to send me a few links for storage, so I happily took him up on that offer. First up we have the announcement of the release of the NVMe 1.3 specification, which adds a number of new features and expands the suitability of the specification to more markets (like mobile devices). Next up is a brief review of M&A (merger and acquisition) activity in the storage industry in the first half of 2017. There are some well-known entries there (such as Nimble’s acquisition by HPE), but also some not-so-well-known ones (such as Double-Take’s acquisition by Carbonite). Thanks for the links, J! (By the way, all readers are more than welcome to send over links they feel might be useful to other readers. Don’t just send me press releases, please.)
  • WekaIO recently came out of stealth, touting their “cloud-scale storage software.” It looks interesting; check out the press release here.

Virtualization

Career/Soft Skills

  • Frank Denneman’s recent release of VMware vSphere 6.5 Host Resources Deep Dive led him to write about the core motivation for writing a book. As a fellow author, I can wholeheartedly endorse and support Frank’s statements in this article. If you’re thinking about writing a book, definitely read this post. As Frank says, I’m not saying don’t do it—I’m just advocating for being well-educated before you jump in.
  • I found Matt Klein’s article on why he will not start an Envoy platform company interesting, and I think there are lessons there that many of us can apply to our own career decisions/directions.

I think that’s enough for this time around; here’s hoping you found something useful and pertinent. As I mentioned above in the Storage section, if you have some links you’ve found useful that you’d like to share with me, I’d love to see them (and maybe they’ll make their way into a future Tech Short Take!). Thanks for reading, and have a great weekend!

Information on the Recent Site Migration

Earlier this week, I completed the migration of this site to an entirely new platform, marking the third or fourth platform migration for this site in its 12-year history. Prior to the migration, the site was generated using Jekyll and GitHub Pages following a previous migration in late 2014. Prior to that, I ran WordPress for about 9 years. So what is it running now?

The site is now generated using Hugo, an extraordinarily fast static site generator. I switched to Hugo because it offers a couple of key benefits over Jekyll:

  1. Site build times are 10x faster (less than 30 seconds with Hugo compared to over 5 minutes with Jekyll)—this directly translates into me being able to test changes to the site much more quickly (Update: after some optimizations, site build times are down to less than 2 seconds!)
  2. Hugo is a single binary that’s easily installed on Linux or macOS (and Windows too, though I don’t have any Windows systems)

Hugo also gives me more flexibility than I had with Jekyll, such as generating lists of articles by tag or lists of articles by category. Along with those additions—the ability to browse by tag or category—I’ve also removed the pagination (I mean, who’s really going to page through 188 pages of posts?) and instead made the 50 most recent posts available directly via the home page. The full content of the first 5 is displayed, followed by excerpts/summaries of the next 15 and then links to the next 30 most recent articles. (I’d love to get your feedback on what you think about this arrangement—helpful, not helpful, etc.)

Hugo generates the site from the source code, which is version controlled using Git and stored on GitHub. However, the generated static HTML files are not served by GitHub.

Instead, the generated static HTML files are uploaded to Amazon S3. To streamline the process of uploading the static HTML files to S3, I’m leveraging a tool called s3deploy, which will only upload files that have actually changed (instead of uploading the entire site every time it is generated, which is what aws s3 sync would do).

The act of generating the site (using the hugo command) and uploading the changed files (using s3deploy) are bundled together in a Makefile that allows me to easily build the site and upload the changed files with a single command.
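As a rough illustration only (the s3deploy flags and bucket name here are placeholders, not my actual configuration), such a Makefile might look something like this:

build:
	hugo

deploy: build
	s3deploy -source=public/ -bucket=example-bucket -region=us-east-1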

Although it’s certainly possible to serve a static site out of an S3 bucket, this site is instead served by the Amazon CloudFront content distribution network (CDN). Right now, the CloudFront distribution is limited to only using the US, Canada, and Europe endpoints until I get a better feel for utilization and cost. Assuming the costs aren’t too high, I can easily expand that to include CDN endpoints worldwide, allowing me to provide better latency to readers all around the globe. Path invalidation (to refresh the CDN cache) using aws cloudfront create-invalidation isn’t yet automated, but I’m working on that aspect.
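When I do automate it, the invalidation itself boils down to a single command (the distribution ID below is a placeholder):

aws cloudfront create-invalidation --distribution-id $DISTRIBUTION_ID --paths '/*'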

Over the next few weeks, I’ll be fine-tuning the SEO of the site and making other adjustments to optimize the site for S3 and CloudFront. Those changes should be pretty much invisible to most readers.

Hopefully, this helps answer some questions about why I migrated the site (again), and what is currently in use behind the scenes to power the site. If anyone has additional questions, feel free to hit me up on Twitter. Thanks for reading!

Recent Posts

VMworld 2017 Prayer Time

At VMworld 2017 in Las Vegas, I’m organizing—as I have in previous years—a gathering of Christians for a brief time of prayer while at the conference. If you’re interested in joining us, here are the details.

Read more...

Ten Years of Spousetivities

A long time ago in a galaxy far, far away (OK, so it was 2008 and it was here in this galaxy—on this very planet, in fact), I posted an article about bringing your spouse to VMworld. That one post sparked a fire that, kindled by my wife’s passion and creativity, culminates this year in ten years of Spousetivities! Yes, Spousetivities is back at VMworld (both US and Europe) this year, and Crystal has some pretty nice events planned for this year’s participants.

Read more...

The Linux Migration: July 2017 Progress Report

I’m now roughly six months into using Linux as my primary laptop OS, and it’s been a few months since my last progress report. If you’re just now picking up this thread, I encourage you to go back and read my initial progress report, see which Linux distribution I selected, or check how I chose to handle corporate collaboration (see here, here, and here). In this post, I’ll share where things currently stand.

Read more...

Technology Short Take #84

Welcome to Technology Short Take #84! This episode is a bit late (sorry about that!), but I figured better late than never, right? OK, bring on the links!

Read more...

CentOS Atomic Host Customization Using cloud-init

Back in early March of this year, I wrote a post on customizing the Docker Engine on CentOS Atomic Host. In that post, I showed how you could use systemd constructs like drop-in units to customize the behavior of the Docker Engine when running on CentOS Atomic Host. In this post, I’m going to build on that information to show how this can be done using cloud-init on a public cloud provider (AWS, in this case).

Read more...

Bastion Hosts and Custom SSH Configurations

The idea of an SSH bastion host is something I discussed here about 18 months ago. For the most part, it’s a pretty simple concept (yes, things can get quite complex in some situations, but I think these are largely corner cases). For the last few months, though, I’ve been trying to use an SSH bastion host and failing, and I could not figure out why it wouldn’t work. The answer, it turns out, lies in custom SSH configurations.

Read more...

Technology Short Take #83

Welcome to Technology Short Take #83! This is a slightly shorter TST than usual, which might be a nice break from the typical information overload. In any case, enjoy!

Read more...

Container Deployment Demos from Interop ITX

At Interop ITX 2017 in Las Vegas, I had the privilege to lead a half-day workshop on options for deploying containers to cloud providers. As part of that workshop, I gave four live demos of using different deployment options. Those demos—along with the slides I used for my presentation along the way—are now available to anyone who might like to try them on their own.

Read more...

Open vSwitch Day at OpenStack Summit 2017

This is a “liveblog” (not quite live, but you get the idea) of the Open vSwitch Open Source Day happening at the OpenStack Summit in Boston. Summaries of each of the presentations are included below.

Read more...

Liveblog: AT&T's Container Strategy and OpenStack's Role in it

This is a liveblog of the OpenStack Summit session titled “AT&T’s Container Strategy and OpenStack’s Role in it”. The speakers are Kandan Kathirvel and Amit Tank, both from AT&T. I really wanted to sit in on Martin Casado’s presentation next door (happening at the same time), but as much as I love watching/hearing Martin speak, I felt like this presentation might expose me to some new information.

Read more...

Liveblog: Deploying Containerized OpenStack: Challenges & Tools Comparison

This is a liveblog for an OpenStack Summit session on containerized OpenStack and a comparison of the tools used for containerized OpenStack. The speaker is Jaivish Kothari, from NEC Technologies. Two other speakers were listed on the title slide, but were apparently unable to make it to the Summit to present.

Read more...

Liveblog: Kuryr Project Update

This is a liveblog of an OpenStack Summit session providing an update on the Kuryr project. The speakers are Antoni Segura Puimedon and Irena Berezovsky. Kuryr, if you recall, was a project aimed at making OpenStack Neutron functionality available to Docker containers; it has since expanded to also offer Cinder and Manila storage to Docker containers, and has added support for both Docker Swarm and Kubernetes as well.

Read more...

Liveblog: OpenStack Summit Keynote, Day 2

This is a liveblog of the day 2 keynote of the OpenStack Summit in Boston, MA. (I wasn’t able to liveblog yesterday’s keynote due to a schedule conflict.) It looks as if today’s keynote will have an impressive collection of speakers from a variety of companies, and—judging from the number of laptops on the stage—should feature a number of demos (hopefully all live).

Read more...

Using a Makefile with Markdown Documents

It’s no secret that I’m a big fan of using Markdown (specifically, MultiMarkdown) for the vast majority of all the text-based content that I create. Over the last few years, I’ve used various tools and created scripts to help “reduce the friction” involved with converting Markdown source files into a variety of destination formats (HTML, RTF, or DOCX, for example). Recently, thanks to Cody Bunch, I was pointed toward the use of a Makefile to assist in this area. After a short period of experimentation, I’m finding that I really like this workflow, and I wanted to share some details here with my readers.

Read more...

Technology Short Take #82

Welcome to Technology Short Take #82! This issue is a bit behind schedule; I’ve been pretty heads-down on some projects. That work will come to fruition in a couple weeks, so I should be able to come up for some air soon. In the meantime, here’s a few links and articles for your reading pleasure.

Read more...

Older Posts

Find more posts by browsing the post categories, content tags, or site archives pages. Thanks for visiting!