Scott's Weblog The weblog of an IT pro specializing in cloud computing, virtualization, and networking, all with an open source view

Installing the VMware Horizon Client on Fedora 27

In this post, I’ll outline the steps necessary to install the VMware Horizon client for Linux on Fedora 27. Although VMware provides an “install bundle,” the bundle does not, unfortunately, address any of the prerequisites that are necessary in order for the Horizon client to work. Fortunately, some other folks shared their experiences, and building on their knowledge I was able to make it work. I hope that this post will, in turn, help others who may find themselves in the same situation.

Based on information found here and here, I took the following steps before attempting to install the VMware Horizon client for Linux:

  1. First, I installed the libpng12 package using sudo dnf install libpng12.

  2. I then created a symbolic link for the library that the Horizon client requires:

    sudo ln -s /usr/lib64/ /usr/lib64/

  3. I created a symbolic link for the library the Horizon client expects to have available:

    sudo ln -s /usr/lib64/ /usr/lib64/
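Taken together, the prerequisite steps look something like the sketch below. Note that the specific library filenames (libudev.so.0 and libffi.so.5, pointed at their newer counterparts) are assumptions drawn from similar reports, not confirmed for every setup; check the Horizon client's error output for the exact names it wants on your system:

```shell
# Library names below are assumptions based on similar reports; the
# Horizon client's error messages will name the exact libraries it wants.
sudo dnf install libpng12
sudo ln -s /usr/lib64/libudev.so.1 /usr/lib64/libudev.so.0
sudo ln -s /usr/lib64/libffi.so.6 /usr/lib64/libffi.so.5
```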

With these packages and symbolic links in place, I proceeded to install the VMware Horizon client using the install bundle downloaded from the public VMware web site (for version 4.6.0 of the client). Per the guidelines in this GitHub gist, I deselected almost all of the options (Smart Card, Real-Time Audio-Video, and Multimedia Redirection were all deselected; if I recall correctly, only Virtual Printing, Client Drive Redirection, and USB Redirection were left selected). The installation proceeded without incident, and the scan at the end reported success on all fronts.

Once the installation was complete, I was able to launch the Horizon client and proceed without further issues.

Here’s hoping this information helps others who may be looking to use the Horizon Client for Linux. If you have questions, feel free to hit me up on Twitter.

Using Vagrant with Azure

In this post, I’ll describe how to use Vagrant with Azure. You can consider this article an extension of some of my earlier Vagrant articles; namely, the posts on using Vagrant with AWS and using Vagrant with OpenStack. The theme across all these posts is examining how one might use Vagrant to simplify/streamline the consumption of resources from a provider using the familiar Vagrant workflow.

If you aren’t already familiar with Vagrant, I’d highly recommend first taking a look at my introduction to Vagrant, which provides an overview of the tool and how it’s used.

Prerequisites

Naturally, you’ll need to first ensure that you have Vagrant installed. This is really well-documented already, so I won’t go over it here. Next, you’ll need to install the Azure provider for Vagrant, which you can handle using this command:

vagrant plugin install vagrant-azure

You’ll also (generally) want to have the Azure CLI installed. (You’ll need it for a one-time configuration task I’ll mention shortly.) I’ve published a couple posts on installing the Azure CLI; see here or here.

Once you’ve installed the vagrant-azure plugin and the Azure CLI, you’ll next need to install a box that Vagrant can use. Here, the use of Vagrant with Azure is different than the use of Vagrant with a provider like VirtualBox or VMware Fusion/VMware Workstation. Like when using Vagrant with AWS, when you’re using Vagrant with Azure the box is a “dummy box” that doesn’t really do anything; instead, you’re relying on Azure VM images. You can install a “dummy box” for Azure with this command:

vagrant box add azure-dummy --provider azure

You’ll then reference this “dummy box” in your Vagrantfile, as I’ll illustrate shortly.

The final step is to create an Azure Active Directory (AD) service principal for Vagrant to use when connecting to Azure:

az ad sp create-for-rbac

Make note of the values returned in the command’s JSON response; you’ll need them later. You’ll also want to know your Azure subscription ID, which you can obtain by running az account list --query '[?isDefault].id' -o tsv.

Now that you’ve installed Vagrant, installed the vagrant-azure plugin, downloaded the dummy Azure box, and created the Azure AD service principal, you’re ready to start spawning some Azure VMs with Vagrant.

Launching Azure VMs via Vagrant

When I use Vagrant, I prefer to keep the Vagrant configuration (stored in the file named Vagrantfile) as clean as possible, and separate details into a separate data file (typically using YAML). I outlined this approach here and here. For the purposes of this post, however, I’ll just embed the details directly into the Vagrant configuration to make it easier to understand. I have examples of using a YAML data file with Vagrant and Azure in the “Additional Resources” section below.
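As a quick sketch of that data-file pattern (the file name servers.yml and the name/size keys are hypothetical, not taken from the vagrant-azure docs), the Vagrantfile loads the YAML and loops over it; here the YAML is inlined so the snippet stands alone:

```ruby
require 'yaml'

# Inline stand-in for an external servers.yml data file;
# the name/size schema here is hypothetical.
yaml_data = <<~YAML
  - name: aztest-1
    size: Standard_B1s
  - name: aztest-2
    size: Standard_B1s
YAML

servers = YAML.safe_load(yaml_data)
servers.each do |s|
  puts "#{s['name']}: #{s['size']}"
end
```

In a real Vagrantfile, you’d replace the inline document with YAML.load_file('servers.yml') and define one config.vm.define block per entry.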

Here’s a snippet of a Vagrantfile you could use to spin up Azure VMs using Vagrant:

# Require the Azure provider plugin
require 'vagrant-azure'

# Create and configure the Azure VMs
Vagrant.configure('2') do |config|

  # Use dummy Azure box
  config.vm.box = 'azure-dummy'

  # Specify SSH key
  config.ssh.private_key_path = '~/.ssh/id_rsa'

  # Configure the Azure provider
  config.vm.provider 'azure' do |az, override|
    # Pull Azure AD service principal information from environment variables
    az.tenant_id = ENV['AZURE_TENANT_ID']
    az.client_id = ENV['AZURE_CLIENT_ID']
    az.client_secret = ENV['AZURE_CLIENT_SECRET']
    az.subscription_id = ENV['AZURE_SUBSCRIPTION_ID']

    # Specify VM parameters
    az.vm_name = 'aztest'
    az.vm_size = 'Standard_B1s'
    az.vm_image_urn = 'Canonical:UbuntuServer:16.04-LTS:latest'
    az.resource_group_name = 'vagrant'
  end # config.vm.provider 'azure'
end # Vagrant.configure

Naturally, this is a very generic configuration, so you’d need to supply the specific details you want to use. In particular, you’d need to supply the following details:

  • The SSH keypair you want to use (via the config.ssh.private_key_path setting)
  • The Azure VM size you’d like to use (via the az.vm_size setting)
  • The Azure VM image you want to use (via the az.vm_image_urn setting)
  • The name of the Azure resource group you’d like to use (via the az.resource_group_name setting; this group should not already exist)

Also, you’ll note that the Vagrantfile assumes you’ve set some environment variables that will allow Vagrant to communicate with Azure. These values are taken from the JSON output of creating the Azure AD service principal:

  • The AZURE_TENANT_ID maps to the “tenant” key of the JSON output
  • The AZURE_CLIENT_ID maps to the “appId” key of the JSON output
  • The AZURE_CLIENT_SECRET maps to the “password” key of the JSON output
  • The AZURE_SUBSCRIPTION_ID is your Azure subscription ID, as shown when running az account show

Set these environment variables (you can use the export command) before running vagrant up, or you’ll get an error.
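For example, with placeholder values standing in for the real IDs (no real credentials here):

```shell
# Placeholders only -- substitute the values from your own
# 'az ad sp create-for-rbac' output and your subscription ID.
export AZURE_TENANT_ID="00000000-0000-0000-0000-000000000000"    # "tenant"
export AZURE_CLIENT_ID="11111111-1111-1111-1111-111111111111"    # "appId"
export AZURE_CLIENT_SECRET="replace-with-sp-password"            # "password"
export AZURE_SUBSCRIPTION_ID="22222222-2222-2222-2222-222222222222"
```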

With the right configuration details in place, simply run vagrant up from the same directory where the Vagrant configuration (in Vagrantfile) is stored. Vagrant will communicate with Azure (using the values in the corresponding environment variables) and create/launch the Azure VMs per the details provided.

Once the instances are up, the “standard” Vagrant workflow applies:

  • Use vagrant ssh <name> to log into one of the VMs.
  • Use vagrant provision to apply any provisioning instructions (i.e., to copy files across or run a configuration management tool).
  • Use vagrant destroy to kill/terminate the VMs.

One nice aspect of using Vagrant with Azure in this way is that you get the same workflow and commands to work with Azure VMs as you do with AWS instances, VirtualBox VMs, or VMware Fusion/Workstation VMs.

Why Use Vagrant with Azure?

As I stated in my post on using Vagrant with AWS, using Vagrant with Azure in this fashion is a great fit for the creation/destruction of temporary environments used for testing, software development, etc. However, in situations where you are creating more “permanent” infrastructure—such as deploying production applications onto Azure—then I would say that Vagrant is not the right fit. In those cases, using a tool like Terraform (see my introductory post) would be a better fit.

Additional Resources

To help make using Vagrant with Azure easier, I’ve created a couple learning environments that are part of my GitHub “learning-tools” repository. Specifically, see the vagrant/azure directory for a sample Vagrant configuration that uses an external YAML data file for instance details.

Additionally, see the documentation for the vagrant-azure plugin on GitHub for more information.

Technology Short Take 91

Welcome to Technology Short Take 91! It’s been a bit longer than usual since the last Tech Short Take (partly due to the US Thanksgiving holiday, partly due to vacation time, and partly due to business travel), so apologies for that. Still, there’s a great collection of links and articles here for you, so dig in and enjoy.

Networking

  • Amanpreet Singh has a two-part series on Kubernetes networking (part 1, part 2).
  • Anthony Spiteri has a brief look at NSX-T 2.1, which recently launched with support for Pivotal Container Service (PKS) and Pivotal Cloud Foundry, further extending the reach of NSX into new areas.
  • Jon Benedict has a brief article on OVN and its integration into Red Hat Virtualization; if you’re unfamiliar with OVN, it might be worth having a look.
  • sFlow is a networking technology that I find quite interesting, but I never seem to have the time to really dig into it. For example, I was recently browsing the sFlow blog and came across two really neat articles. The first was on RESTful control of Cumulus Linux ACLs (this one isn’t actually sFlow-related); the second was on combining sFlow telemetry and RESTful APIs for visibility and control in campus networks.
  • David Gee’s “network automation engineer persona” content continues; this time he tackles some thoughts around proof-of-concepts (PoCs).

Servers/Hardware

  • Frank Denneman (with an admittedly vSphere-focused lens) takes a look at the Intel Xeon Scalable Family in a two-part (so far) series. Part 1 covers the CPUs themselves; part 2 discusses the memory subsystem. Both articles are worth reviewing if hardware selection is an important aspect of your role.
  • Kevin Houston provides some details on blade server options for VMware vSAN Ready Nodes.


Cloud Computing/Cloud Management

  • The Cloud Native Computing Foundation (CNCF) and the Kubernetes community introduced the Certified Kubernetes Conformance Program, and the first announcements of certification have started rolling in. First, here’s Google’s announcement of renaming Google Container Engine to Google Kubernetes Engine (making the GKE acronym much more applicable) as a result of its certification. Next, here’s an announcement on the certification of PKS (Pivotal Container Service).
  • Henrik Schmidt writes about the kube-node project, an effort to allow Kubernetes to manage worker nodes in a cluster.
  • Helm is a great way to deploy applications onto (into?) a Kubernetes cluster, but there are some ways you can improve Helm’s security. Check out this article from Matt Butcher on securing Helm.
  • This site is a good collection of “lessons learned from the trenches” on running Kubernetes on AWS in production.
  • I have to be honest: this blog post on using OpenStack Helm to install OpenStack on Kubernetes with Rook sounds like a massive science experiment. That’s a lot of moving pieces!
  • User “sysadmin1138” (I couldn’t find a mapping to a real name, perhaps that’s intentional) has a great write-up on her/his experience with Terraform in production. There’s some great information here for those of you thinking of (or currently) using Terraform to manage production workloads/configurations.

Operating Systems/Applications

  • Michael Crosby outlines multi-client support in containerd.
  • Speaking of containerd, it just recently hit 1.0.
  • This is a slightly older post by Alex Ellis on attachable networks, which (as I understand it) enable interoperability between declarative workloads (deployed via docker stack deploy) and imperative workloads (launched via docker run).



Career/Soft Skills

  • Pat Bowden discusses the idea of learning styles, and how combining learning styles (or multiple senses) can typically contribute to more successful learning.
  • I also found some useful tidbits on learning over at The Art of Learning project website.

That’s all for now (but I think that should be enough to keep you busy for a little while, at least!). I’ll have another Tech Short Take in 2 weeks, though given the holiday season is nigh upon us it might be a bit light on content. Until then!

Installing the Azure CLI on Fedora 27

This post is a follow-up to a post from earlier this year on manually installing the Azure CLI on Fedora 25. I encourage you to refer back to that post for a bit of background. I’m writing this post because the procedure for manually installing the Azure CLI on Fedora 27 is slightly different than the procedure for Fedora 25.

Here are the steps to install the Azure CLI into a Python virtual environment on Fedora 27. Even though they are almost identical to the Fedora 25 instructions (one additional package is required), I’m including all the information here for the sake of completeness.

  1. Make sure that the “gcc”, “libffi-devel”, “python-devel”, “openssl-devel”, “python-pip”, and “redhat-rpm-config” packages are installed (you can use dnf to take care of this). Some of these packages may already be installed; during my testing with a Fedora 27 Cloud Base Vagrant image, these needed to be installed. (The change from Fedora 25 is the addition of the “redhat-rpm-config” package.)

  2. Install virtualenv either with pip install virtualenv or dnf install python2-virtualenv. I used dnf, but I don’t think the method you use here will have any material effects.

  3. Create a new Python virtual environment with virtualenv azure-cli (feel free to use a different name).

  4. Activate the new virtual environment (typically accomplished by sourcing the azure-cli/bin/activate script; substitute the name you used when creating the virtual environment if you didn’t name it azure-cli).

  5. Install the Azure CLI with pip install azure-cli. Once this command completes, you should be ready to roll.
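Condensed into a single run, the steps above look like this (assuming dnf for both the prerequisite packages and virtualenv, as described above):

```shell
# Fedora 27 package names as listed above; adjust for other releases
sudo dnf install gcc libffi-devel python-devel openssl-devel \
    python-pip redhat-rpm-config python2-virtualenv
virtualenv azure-cli
source azure-cli/bin/activate
pip install azure-cli
az --version    # quick sanity check
```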

That’s it!

Using Vagrant with Libvirt on Fedora 27

In this post, I’m going to show you how to use Vagrant with Libvirt via the vagrant-libvirt provider when running on Fedora 27. Both Vagrant and Libvirt are topics I’ve covered more than a few times here on this site, but this is the first time I’ve discussed combining the two projects.

If you’re unfamiliar with Vagrant, I recommend you start first with my quick introduction to Vagrant, after which you can browse all the “Vagrant”-tagged articles on my site for a bit more information. If you’re unfamiliar with Libvirt, you can browse all my “Libvirt”-tagged articles; I don’t have an introductory post for Libvirt.

Background

I first experimented with the Libvirt provider for Vagrant quite some time ago, but at that time I was using the Libvirt provider to communicate with a remote Libvirt daemon (the use case was using Vagrant to create and destroy KVM guest domains via Libvirt on a remote Linux host). I found this setup to be problematic and error-prone, and discarded it after only a short while.

Recently, I revisited using the Libvirt provider for Vagrant on my Fedora laptop (which I rebuilt with Fedora 27). As I mentioned in this post, installing VirtualBox on Fedora isn’t exactly straightforward. Further, what I didn’t mention in that post is that the VirtualBox kernel modules aren’t signed; this means you must turn off Secure Boot in order to run VirtualBox on Fedora. I was loath to turn off Secure Boot, so I thought I’d try the Vagrant+Libvirt combination again—this time using Libvirt to talk to the local Libvirt daemon (which is installed by default on Fedora in order to support the GNOME Boxes application, a GUI virtual machine tool). Hence, this blog post.

Prerequisites

Obviously, you’ll need Vagrant installed; I chose to install Vagrant from the Fedora repositories using dnf install vagrant. At the time of this writing, that installed version 1.9.8 of Vagrant. You’ll also need the Libvirt plugin, which is available via dnf:

dnf install vagrant-libvirt vagrant-libvirt-doc

At the time of writing, this installed version 0.40.0 of the Libvirt plugin, which is the latest version. You could also install the plugin via vagrant plugin install vagrant-libvirt, though I didn’t test this approach. (In theory, it should work fine.)

As with most other providers (the AWS and OpenStack providers being the exceptions), you’ll also need one or more Vagrant boxes formatted for the Libvirt provider. I found a number of Libvirt-formatted boxes on Vagrant Cloud, easily installable via vagrant box add. For the purposes of this post, I’ll use the “fedora/27-cloud-base” Vagrant box with the Libvirt provider.

Finally, because Vagrant is orchestrating Libvirt on the back-end, I also found it helpful to have the Libvirt client tools (like virsh) installed. This lets you see what Vagrant is doing behind the scenes, which can be helpful at times. Just run dnf install libvirt-client.

Using Libvirt with Vagrant

Once all the necessary prerequisites are satisfied, you’re ready to start managing Libvirt guest domains (VMs) with Vagrant. For a really quick start:

  1. cd into a directory of your choice
  2. Run vagrant init fedora/27-cloud-base to create a sample Vagrantfile
  3. Boot the VM with vagrant up
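If another provider (say, VirtualBox) is configured as your default, you can name the provider explicitly when bringing the VM up:

```shell
mkdir fedora27-test && cd fedora27-test    # directory name is arbitrary
vagrant init fedora/27-cloud-base
vagrant up --provider=libvirt
```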

For more fine-grained control over the VM and its settings, you’ll want to customize the Vagrantfile with some additional settings. Here’s a sample Vagrantfile that shows a few (there are many!) of the ways you could customize the VM Vagrant creates:

Vagrant.configure("2") do |config|
  # Define the Vagrant box to use
  config.vm.box = "fedora/27-cloud-base"

  # Disable automatic box update checking
  config.vm.box_check_update = false

  # Set the VM hostname
  config.vm.hostname = "fedora27"

  # Attach to an additional private network
  config.vm.network "private_network", ip: ""

  # Modify some provider settings
  config.vm.provider "libvirt" do |lv|
    lv.memory = "1024"
  end # config.vm.provider
end # Vagrant.configure

For a more complete reference, see the GitHub repository for the vagrant-libvirt provider. Note, however, that I did run into a few oddities, particularly around networking. For example, I wasn’t able to create a new private Libvirt network using the libvirt__network_address setting; it always reverted to the default network address. However, using the syntax shown above, I was able to create a new private Libvirt network with the desired network address. I was also able to manually create a new Libvirt network (using virsh net-create) and then attach the VM to that network using the libvirt__network_name setting in the Vagrantfile. Some experimentation may be necessary to get precisely the results you’re seeking.
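To sketch the manual approach mentioned above (the network name vagrant-net and the addressing are hypothetical; adjust to taste):

```shell
# Define and start a transient Libvirt network with virsh net-create,
# then point the Vagrantfile at it via libvirt__network_name.
cat > vagrant-net.xml <<'EOF'
<network>
  <name>vagrant-net</name>
  <ip address="192.168.100.1" netmask="255.255.255.0"/>
</network>
EOF
virsh net-create vagrant-net.xml
```

The corresponding Vagrantfile line would then look something like ` "private_network", libvirt__network_name: 'vagrant-net'`.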

Once you’ve instantiated the VM using vagrant up, then the standard Vagrant workflow applies:

  • Use vagrant ssh <name> to log into the VM via SSH.
  • Use vagrant provision to apply any provisioning instructions, such as running a shell script, copying files into the VM, or applying an Ansible playbook.
  • Use vagrant destroy to terminate and delete the VM.

There is one potential “gotcha” of which to be aware: when you use vagrant box remove to remove a Vagrant box and you’ve created at least one VM from that box, then there is an additional step required to fully remove the box. When you run vagrant up with a particular box for the very first time, the Libvirt provider uploads the box into a Libvirt storage pool (the pool named “default”, by default). Running vagrant box remove only removes the files from the ~/.vagrant.d directory, and does not remove any files from the Libvirt storage pool.

To remove the files from the Libvirt storage pool, run virsh pool-edit default to get the filesystem path where the storage pool is found (if no changes have been made, the “default” pool should be located at /var/lib/libvirt/images). Navigate to that directory and remove the appropriate files in order to complete the removal of a particular box (or a specific version of a box).
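A non-interactive way to find the pool path is virsh pool-dumpxml, which prints the pool XML instead of opening an editor the way pool-edit does:

```shell
# Locate the storage pool's filesystem path, then inspect its contents
virsh pool-dumpxml default | grep '<path>'
ls /var/lib/libvirt/images
# remove the leftover image(s) for the box you deleted, e.g.:
# sudo rm /var/lib/libvirt/images/<box-image-file>
```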

So far—though my testing has been fairly limited—I’m reasonably pleased with the Libvirt provider when running against a local Libvirt daemon. The performance is good, and I haven’t had to “jump through hoops” to make the virtualization provider work (as I did with VirtualBox on Fedora).

If you have any questions or feedback, hit me up on Twitter. Thanks!

Recent Posts

AWS re:Invent 2017 Keynote with Andy Jassy

This is a liveblog of the re:Invent 2017 keynote with Andy Jassy, taking place on Wednesday at the Venetian. As fully expected given the long queues and massive crowds, even arriving an hour early to the keynote isn’t soon enough; there’s already a huge crowd gathered to make it into the venue. Fortunately, I did make it in and scored a reasonable seat from which to write this liveblog.


Liveblog: Deep Dive on Amazon Elastic File System

This is a liveblog of the AWS re:Invent 2017 session titled “Deep Dive on Amazon Elastic File System (EFS).” The presenters are Edward Naim and Darryl Osborne, both with AWS. This is my last session of day 2 of re:Invent; thus far, most of my time has been spent in hands-on workshops with only a few breakout sessions today. EFS is a topic I’ve watched, but haven’t had time to really dig into, so I’m looking forward to this session.


Liveblog: IPv6 in the Cloud - Protocol and Service Overview

This is a liveblog of an AWS re:Invent 2017 breakout session titled “IPv6 in the Cloud: Protocol and Service Overview.” The presenter’s name is Alan Halachmi, who is a Senior Manager of Solutions Architecture at AWS. As with so many of the other breakout sessions and workshops here at re:Invent this year, the queues to get into the session are long and it’s expected that the session will be completely full.


A Sample Makefile for Creating Blog Articles

In October of this year, I published a blog post talking about a sample Makefile for publishing blog articles. That post focused on the use of make and a Makefile for automating the process of publishing a blog post. This post is a companion to that post, and focuses on the use of a Makefile for automating the creation of blog posts.


Installing MultiMarkdown 6 on Fedora 27

Long-time readers are probably aware that I’m a big fan of Markdown. Specifically, I prefer the MultiMarkdown variant that adds some additional extensions beyond “standard” Markdown. As such, I’ve long used Fletcher Penney’s MultiMarkdown processor (the latest version, version 6, is available on GitHub). While Fletcher offers binary builds for Windows and macOS, the Linux binary has to be compiled from source. In this post, I’ll provide the steps I followed to compile a MultiMarkdown binary for Fedora 27.


Using Docker Machine with KVM and Libvirt

Docker Machine is, in my opinion, a useful and underrated tool. I’ve written before about using Docker Machine with various services/providers; for example, see this article on using Docker Machine with AWS, or this article on using Docker Machine with OpenStack. Docker Machine also supports local hypervisors, such as VMware Fusion or VirtualBox. In this post, I’ll show you how to use Docker Machine with KVM and Libvirt on a Linux host (I’m using Fedora 27 as an example).


Happy Thanksgiving 2017

In the US, today (Thursday, November 23) is Thanksgiving. I’d like to take a moment to reflect on the meaning of Thanksgiving.


Installing Older Docker Client Binaries on Fedora

Sometimes there’s a need to have different versions of the Docker client binary available. On Linux this can be a bit challenging because you don’t want to install a “full” Docker package (which would also include the Docker daemon); you only need the binary. In this article, I’ll outline a process I followed to get multiple (older) versions of the Docker client binary on my Fedora 27 laptop.


Installing Postman on Fedora 27

I recently had a need to install the Postman native app on Fedora 27. The Postman site itself only provides a link to the download and a rather generic set of instructions for installing the Postman native app (a link to these instructions for Ubuntu 16.04 is also provided). There were not, however, any directions for Fedora. Hence, I’m posting the steps I took to set up the Postman native app on my Fedora 27 laptop.


Making AWS re:Invent More Family-Friendly

AWS re:Invent is just around the corner, and Spousetivities will be there to help bring a new level of family friendliness to the event. If you’re thinking of bringing a spouse, partner, or significant other with you to Las Vegas, I’d encourage you to strongly consider getting him or her involved in Spousetivities.


Technology Short Take 90

Welcome to Technology Short Take 90! This post is a bit shorter than most, as I’ve been on the road quite a bit recently. Nevertheless, there’s hopefully something here you’ll find useful.


How to Tag Docker Images with Git Commit Information

I’ve recently been working on a very simple Flask application that can be used as a demo application in containerized environments (here’s the GitHub repo). It’s nothing special, but it’s been useful for me as a learning exercise—both from a Docker image creation perspective as well as getting some additional Python knowledge. Along the way, I wanted to be able to track versions of the Docker image (and the Dockerfile used to create those images), and link those versions back to specific Git commits in the source repository. In this article, I’ll share a way I’ve found to tag Docker images with Git commit information.


Deep Dive into Container Images in Kolla

This is a liveblog of my last session at the Sydney OpenStack Summit. The session title is “OpenStack images that fit your imagination: deep dive into container images in Kolla.” The presenters are Vikram Hosakote and Rich Wellum, from Cisco and Lenovo, respectively.


Carrier-Grade SDN-Based OpenStack Networking Solution

This session was titled “Carrier-Grade SDN Based OpenStack Networking Solution,” led by Daniel Park and Sangho Shin. Both Park and Shin are from SK Telecom (SKT), and (based on the description) this session is a follow-up to a session from the Boston summit where SK Telecom talked about an SDN-based networking solution they’d developed and released for use in their own 5G-based network.


Can OpenStack Beat AWS in Price

This is a liveblog of the session titled “Can OpenStack Beat AWS in Price: The Trilogy”. The presenters are Rico Lin, Bruno Lago, and Jean-Daniel Bonnetot. The “trilogy” refers to the third iteration of this presentation; each time the comparison has been done in a different geographical region (first in Europe, then in North America, and finally here in Asia-Pacific).

