Scott’s Weblog
The weblog of an IT pro specializing in cloud computing, virtualization, and networking, all with an open source view

Installing XMind 8 on Fedora 27

XMind is a well-known cross-platform mind mapping application. Installing the latest version of XMind (version 8) on Linux is, unfortunately, more complicated than it should be. In this post, I’ll show how to get XMind 8 running on Fedora 27.

So why is installing XMind more complicated than it should be? For reasons unknown, the makers of XMind stopped using well-known Linux package mechanisms with this version of the software, providing only a ZIP archive to download and extract. (Previous versions at least provided a Debian package.) While the ZIP archive includes a very simplistic “setup script”, the script does nothing more than install a few packages and some fonts, and it was written expressly for Debian-based systems. If you extract the archive and place the files outside your home directory (as would be typical for installing an application on most desktop Linux distributions), you’ll run into permissions problems. Finally, the application itself is extraordinarily brittle with regard to file locations; it’s easy to break it by simply moving the wrong file.

Through some research and trial-and-error, I finally arrived at a configuration for XMind 8 on Fedora 27 that satisfies a couple of criteria:

  1. The application should reside outside the user’s home directory in a location that is typical for third-party applications (for example, in the /opt directory).

  2. All user-specific directories and information should reside in the user’s home directory, eliminating the need for overly-permissive file/group permissions.

Here are the steps you can follow to get XMind 8 installed on Fedora 27 in a way that satisfies these criteria.

First, you’ll need to install the “java-1.8.0-openjdk” package using dnf install. XMind has a few different prerequisite packages (lame, webkitgtk, and glibc), but in my tests on a Fedora 27 system this was the only package that wasn’t already installed. Note that if you’re trying to replicate these instructions on a different Linux distribution, this step is where you’ll need to locate distribution-specific package names (most of the remaining steps apply to any Linux distribution).

Next, download the XMind 8 ZIP archive, and extract it into an xmind directory (substituting the name of the file you actually downloaded):

unzip <downloaded-zip-file> -d xmind

For simplicity’s sake, I chose to work within my own ~/Downloads directory, but you should be able to work from within any directory where you have write permissions.

Third, create a user-specific area for XMind to store information:

mkdir -p ~/.config/xmind/workspace

I chose the ~/.config directory since it was already present and utilized for application-specific information. You can use a different directory, but the rest of the instructions will assume this path was used.

Fourth, go ahead and remove the 32-bit version of the XMind executable; it’s pretty likely you won’t need it:

rm -rf xmind/XMind_i386

Fifth, copy over two directories into the user-specific area you created earlier:

cp -a xmind/XMind_amd64/configuration ~/.config/xmind/
cp -a xmind/XMind_amd64/p2 ~/.config/xmind/

Based on my testing, this is the only step needed to make XMind work for additional users on your Fedora system (i.e., you’ll want to run this step for each additional user who should be able to run XMind).

Next, you’ll need to update XMind.ini to tell XMind the new user-specific locations. Edit this file (found in the XMind_amd64 subdirectory), and make the following changes:

  • On line 2, change ./configuration to @user.home/.config/xmind/configuration (@user.home refers to the user’s home directory; the tilde shortcut won’t work here)
  • On line 4, change ../workspace to @user.home/.config/xmind/workspace
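If you’d rather script these changes, the two edits can be expressed as sed substitutions. Here’s a sketch that demonstrates them against a mocked-up XMind.ini; the four-line layout shown is an assumption based on the stock Eclipse-style launcher file, so check your copy before pointing sed at the real thing:

```shell
# Demonstrate in a scratch directory, using a mocked-up XMind.ini whose
# first four lines match the stock layout described above (the real file
# lives in the XMind_amd64 subdirectory).
cd "$(mktemp -d)"
printf -- '-configuration\n./configuration\n-data\n../workspace\n' > XMind.ini

# Both edits, expressed as in-place sed substitutions:
sed -i \
  -e 's|^\./configuration$|@user.home/.config/xmind/configuration|' \
  -e 's|^\.\./workspace$|@user.home/.config/xmind/workspace|' \
  XMind.ini
```

The same two expressions work against the real file; just run sed against xmind/XMind_amd64/XMind.ini instead.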

You’re almost done! In the fonts subdirectory of the xmind directory in which you stored the extracted files, you’ll find some fonts that are distributed with XMind. Install these on your system (copy them to ~/.local/share/fonts or /usr/share/fonts, as you desire), and then remove the fonts subdirectory. In my particular case, some of the fonts—like the Roboto family—were already installed.

Finally, move the xmind directory to its final location and lock down the permissions:

sudo mv xmind /opt/
sudo chown -R root:root /opt/xmind

The last step is to create a desktop entry file so that XMind is accessible via the launcher. Here’s a sample file:

[Desktop Entry]
Type=Application
Name=XMind
Comment=Create and share mind maps
Icon=xmind
Exec=/opt/xmind/XMind_amd64/XMind %F

This desktop file can go into /usr/share/applications or ~/.local/share/applications, though—for reasons I’ll share shortly—the latter may be a better choice.
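To put the desktop file in place for the current user, something like this works (a sketch; the Type and Name fields are included because the desktop entry specification requires them):

```shell
# Write the desktop entry into the per-user applications directory.
mkdir -p ~/.local/share/applications
cat > ~/.local/share/applications/xmind.desktop <<'EOF'
[Desktop Entry]
Type=Application
Name=XMind
Comment=Create and share mind maps
Exec=/opt/xmind/XMind_amd64/XMind %F
EOF
```

GNOME generally picks up new entries in this directory automatically; if it doesn’t, logging out and back in should do it.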

That’s it! You should be able to launch XMind from the GNOME Activities screen.

Unfortunately, there are some caveats and limitations:

  • XMind apparently stores its application icon buried deep in the ~/.config/xmind/configuration directory structure, which is why you may prefer to use ~/.local/share/applications for the desktop file. If you do use this location, you’ll need to perform this step for other users of the system as well. (It’s also why an Icon=xmind line in the sample desktop file above won’t actually work.)
  • This manual installation doesn’t create a MIME type for XMind documents so that you can open an XMind document from within the Nautilus file manager. (The simplistic shell script supplied in the XMind download doesn’t either.) I’ve done a bit of work on this, but haven’t come to a workable solution yet.

So there you have it: how to get XMind 8 installed and running in some reasonable fashion on Fedora 27. If you have questions, comments, or corrections, feel free to hit me up on Twitter.

Installing the VMware Horizon Client on Fedora 27

In this post, I’ll outline the steps necessary to install the VMware Horizon client for Linux on Fedora 27. Although VMware provides an “install bundle,” the bundle does not, unfortunately, address any of the prerequisites that are necessary in order for the Horizon client to work. Fortunately, some other folks shared their experiences, and building on their knowledge I was able to make it work. I hope that this post will, in turn, help others who may find themselves in the same situation.

Based on information found here and here, I took the following steps before attempting to install the VMware Horizon client for Linux:

  1. First, I installed the libpng12 package using sudo dnf install libpng12.

  2. I then created a symbolic link for the library that the Horizon client requires:

    sudo ln -s /usr/lib64/ /usr/lib64/

  3. I created a symbolic link for the library the Horizon client expects to have available:

    sudo ln -s /usr/lib64/ /usr/lib64/

With these packages and symbolic links in place, I proceeded to install the VMware Horizon client using the install bundle downloaded from the public VMware web site (for version 4.6.0 of the client). Per the guidelines in this GitHub gist, I deselected most of the options (Smart Card, Real-Time Audio-Video, and Multimedia Redirection were all deselected; if I recall correctly, only Virtual Printing, Client Drive Redirection, and USB Redirection were left selected). The installation proceeded without incident, and the scan at the end reported success on all fronts.

Once the installation was complete, I was able to launch the Horizon client and proceed without further issues.

Here’s hoping this information helps others who may be looking to use the Horizon Client for Linux. If you have questions, feel free to hit me up on Twitter.

Using Vagrant with Azure

In this post, I’ll describe how to use Vagrant with Azure. You can consider this article an extension of some of my earlier Vagrant articles; namely, the posts on using Vagrant with AWS and using Vagrant with OpenStack. The theme across all these posts is examining how one might use Vagrant to simplify/streamline the consumption of resources from a provider using the familiar Vagrant workflow.

If you aren’t already familiar with Vagrant, I’d highly recommend first taking a look at my introduction to Vagrant, which provides an overview of the tool and how it’s used.


Naturally, you’ll need to first ensure that you have Vagrant installed. This is really well-documented already, so I won’t go over it here. Next, you’ll need to install the Azure provider for Vagrant, which you can handle using this command:

vagrant plugin install vagrant-azure

You’ll also (generally) want to have the Azure CLI installed. (You’ll need it for a one-time configuration task I’ll mention shortly.) I’ve published a couple of posts on installing the Azure CLI; see here or here.

Once you’ve installed the vagrant-azure plugin and the Azure CLI, you’ll next need to install a box that Vagrant can use. Here, using Vagrant with Azure is different from using Vagrant with a provider like VirtualBox or VMware Fusion/VMware Workstation. As with AWS, the box you use with Azure is a “dummy box” that doesn’t really do anything; instead, you’re relying on Azure VM images. You can install a “dummy box” for Azure with this command:

vagrant box add azure-dummy --provider azure

You’ll then reference this “dummy box” in your Vagrantfile, as I’ll illustrate shortly.

The last step is to create an Azure Active Directory (AD) service principal for Vagrant to use when connecting to Azure. This command will create a service principal for you to use with Vagrant:

az ad sp create-for-rbac

Make note of the values returned in the command’s JSON response; you’ll need them later. You’ll also want to know your Azure subscription ID, which you can obtain by running az account list --query '[?isDefault].id' -o tsv.
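For reference, the JSON response has roughly this shape (values redacted; the key names shown are those returned by Azure CLI 2.x, so confirm them against your own output):

```json
{
  "appId": "...",
  "displayName": "...",
  "name": "...",
  "password": "...",
  "tenant": "..."
}
```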

Now that you’ve installed Vagrant, installed the vagrant-azure plugin, downloaded the dummy Azure box, and have created the Azure AD service principal, you’re ready to start spawning some Azure VMs with Vagrant.

Launching Azure VMs via Vagrant

When I use Vagrant, I prefer to keep the Vagrant configuration (stored in the file named Vagrantfile) as clean as possible, and separate details into a separate data file (typically using YAML). I outlined this approach here and here. For the purposes of this post, however, I’ll just embed the details directly into the Vagrant configuration to make it easier to understand. I have examples of using a YAML data file with Vagrant and Azure in the “Additional Resources” section below.

Here’s a snippet of a Vagrantfile you could use to spin up Azure VMs using Vagrant:

# Require the Azure provider plugin
require 'vagrant-azure'

# Create and configure the Azure VMs
Vagrant.configure('2') do |config|

  # Use the dummy Azure box
  config.vm.box = 'azure-dummy'

  # Specify SSH key
  config.ssh.private_key_path = '~/.ssh/id_rsa'

  # Configure the Azure provider
  config.vm.provider 'azure' do |az, override|
    # Pull Azure AD service principal information from environment variables
    az.tenant_id = ENV['AZURE_TENANT_ID']
    az.client_id = ENV['AZURE_CLIENT_ID']
    az.client_secret = ENV['AZURE_CLIENT_SECRET']
    az.subscription_id = ENV['AZURE_SUBSCRIPTION_ID']

    # Specify VM parameters
    az.vm_name = 'aztest'
    az.vm_size = 'Standard_B1s'
    az.vm_image_urn = 'Canonical:UbuntuServer:16.04-LTS:latest'
    az.resource_group_name = 'vagrant'
  end # config.vm.provider 'azure'
end # Vagrant.configure

Naturally, this is a very generic configuration, so you’d need to supply the specific details you want to use. In particular, you’d need to supply the following details:

  • The SSH keypair you want to use (via the config.ssh.private_key_path setting)
  • The Azure VM size you’d like to use (via the az.vm_size setting)
  • The Azure VM image you want to use (via the az.vm_image_urn setting)
  • The name of the Azure resource group you’d like to use (via the az.resource_group_name setting; this group should not already exist)

Also, you’ll note that the Vagrantfile assumes you’ve set some environment variables that will allow Vagrant to communicate with Azure. These values are taken from the JSON output of creating the Azure AD service principal:

  • The AZURE_TENANT_ID maps to the “tenant” key of the JSON output
  • The AZURE_CLIENT_ID maps to the “appId” key of the JSON output
  • The AZURE_CLIENT_SECRET maps to the “password” key of the JSON output
  • The AZURE_SUBSCRIPTION_ID is your Azure subscription ID, as shown when running az account show

Set these environment variables (you can use the export command) before running vagrant up, or you’ll get an error.
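For example (the values below are placeholders, not real credentials; substitute the values from your own service principal and subscription):

```shell
# Placeholder values -- replace each one with the corresponding value
# from the `az ad sp create-for-rbac` JSON output and your subscription.
export AZURE_TENANT_ID='00000000-0000-0000-0000-000000000000'        # "tenant" key
export AZURE_CLIENT_ID='11111111-1111-1111-1111-111111111111'        # "appId" key
export AZURE_CLIENT_SECRET='service-principal-password'              # "password" key
export AZURE_SUBSCRIPTION_ID='22222222-2222-2222-2222-222222222222'  # subscription ID
```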

With the right configuration details in place, simply run vagrant up from the same directory where the Vagrant configuration (in Vagrantfile) is stored, adding --provider=azure if Azure isn’t your default Vagrant provider. Vagrant will communicate with Azure (using the values in the corresponding environment variables) and create/launch the Azure VMs per the details provided.

Once the instances are up, the “standard” Vagrant workflow applies:

  • Use vagrant ssh <name> to log into one of the VMs.
  • Use vagrant provision to apply any provisioning instructions (e.g., to copy files across or run a configuration management tool).
  • Use vagrant destroy to kill/terminate the VMs.

One nice aspect of using Vagrant with Azure in this way is that you get the same workflow and commands to work with Azure VMs as you do with AWS instances, VirtualBox VMs, or VMware Fusion/Workstation VMs.

Why Use Vagrant with Azure?

As I stated in my post on using Vagrant with AWS, using Vagrant with Azure in this fashion is a great fit for the creation/destruction of temporary environments used for testing, software development, etc. However, in situations where you are creating more “permanent” infrastructure—such as deploying production applications onto Azure—then I would say that Vagrant is not the right fit. In those cases, using a tool like Terraform (see my introductory post) would be a better fit.

Additional Resources

To help make using Vagrant with Azure easier, I’ve created a couple learning environments that are part of my GitHub “learning-tools” repository. Specifically, see the vagrant/azure directory for a sample Vagrant configuration that uses an external YAML data file for instance details.

Additionally, see the documentation for the vagrant-azure plugin on GitHub for more information.

Technology Short Take 91

Welcome to Technology Short Take 91! It’s been a bit longer than usual since the last Tech Short Take (partly due to the US Thanksgiving holiday, partly due to vacation time, and partly due to business travel), so apologies for that. Still, there’s a great collection of links and articles here for you, so dig in and enjoy.

Networking

  • Amanpreet Singh has a two-part series on Kubernetes networking (part 1, part 2).
  • Anthony Spiteri has a brief look at NSX-T 2.1, which recently launched with support for Pivotal Container Service (PKS) and Pivotal Cloud Foundry, further extending the reach of NSX into new areas.
  • Jon Benedict has a brief article on OVN and its integration into Red Hat Virtualization; if you’re unfamiliar with OVN, it might be worth having a look.
  • sFlow is a networking technology that I find quite interesting, but I never seem to have the time to really dig into it. For example, I was recently browsing the sFlow blog and came across two really neat articles. The first was on RESTful control of Cumulus Linux ACLs (this one isn’t actually sFlow-related); the second was on combining sFlow telemetry and RESTful APIs for visibility and control in campus networks.
  • David Gee’s “network automation engineer persona” content continues; this time he tackles some thoughts around proof-of-concepts (PoCs).

Servers/Hardware

  • Frank Denneman (with an admittedly vSphere-focused lens) takes a look at the Intel Xeon Scalable Family in a two-part (so far) series. Part 1 covers the CPUs themselves; part 2 discusses the memory subsystem. Both articles are worth reviewing if hardware selection is an important aspect of your role.
  • Kevin Houston provides some details on blade server options for VMware vSAN Ready Nodes.


Cloud Computing/Cloud Management

  • The Cloud-Native Computing Foundation (CNCF) and the Kubernetes community introduced the Certified Kubernetes Conformance Program, and the first announcements of certification have started rolling in. First, here’s Google’s announcement of renaming Google Container Engine to Google Kubernetes Engine (making the GKE acronym much more applicable) as a result of its certification. Next, here’s an announcement on the certification of PKS (Pivotal Container Service).
  • Henrik Schmidt writes about the kube-node project, an effort to allow Kubernetes to manage worker nodes in a cluster.
  • Helm is a great way to deploy applications onto (into?) a Kubernetes cluster, but there are some ways you can improve Helm’s security. Check out this article from Matt Butcher on securing Helm.
  • This site is a good collection of “lessons learned from the trenches” on running Kubernetes on AWS in production.
  • I have to be honest: this blog post on using OpenStack Helm to install OpenStack on Kubernetes with Rook sounds like a massive science experiment. That’s a lot of moving pieces!
  • User “sysadmin1138” (I couldn’t find a mapping to a real name, perhaps that’s intentional) has a great write-up on her/his experience with Terraform in production. There’s some great information here for those of you thinking of (or currently) using Terraform to manage production workloads/configurations.

Operating Systems/Applications

  • Michael Crosby outlines multi-client support in containerd.
  • Speaking of containerd, it just recently hit 1.0.
  • This is a slightly older post by Alex Ellis on attachable networks, which (as I understand it) enable interoperability between declarative workloads (deployed via docker stack deploy) and imperative workloads (launched via docker run).



Career/Soft Skills

  • Pat Bowden discusses the idea of learning styles, and how combining learning styles (or multiple senses) can typically contribute to more successful learning.
  • I also found some useful tidbits on learning over at The Art of Learning project website.

That’s all for now (but I think that should be enough to keep you busy for a little while, at least!). I’ll have another Tech Short Take in 2 weeks, though given the holiday season is nigh upon us it might be a bit light on content. Until then!

Installing the Azure CLI on Fedora 27

This post is a follow-up to a post from earlier this year on manually installing the Azure CLI on Fedora 25. I encourage you to refer back to that post for a bit of background. I’m writing this post because the procedure for manually installing the Azure CLI on Fedora 27 is slightly different than the procedure for Fedora 25.

Here are the steps to install the Azure CLI into a Python virtual environment on Fedora 27. Even though they are almost identical to the Fedora 25 instructions (one additional package is required), I’m including all the information here for the sake of completeness.

  1. Make sure that the “gcc”, “libffi-devel”, “python-devel”, “openssl-devel”, “python-pip”, and “redhat-rpm-config” packages are installed (you can use dnf to take care of this). Some of these packages may already be installed; during my testing with a Fedora 27 Cloud Base Vagrant image, these needed to be installed. (The change from Fedora 25 is the addition of the “redhat-rpm-config” package.)

  2. Install virtualenv either with pip install virtualenv or dnf install python2-virtualenv. I used dnf, but I don’t think the method you use here will have any material effect.

  3. Create a new Python virtual environment with virtualenv azure-cli (feel free to use a different name).

  4. Activate the new virtual environment (typically accomplished by sourcing the azure-cli/bin/activate script; substitute the name you used when creating the virtual environment if you didn’t name it azure-cli).

  5. Install the Azure CLI with pip install azure-cli. Once this command completes, you should be ready to roll.

That’s it!

Recent Posts

Using Vagrant with Libvirt on Fedora 27

In this post, I’m going to show you how to use Vagrant with Libvirt via the vagrant-libvirt provider when running on Fedora 27. Both Vagrant and Libvirt are topics I’ve covered more than a few times here on this site, but this is the first time I’ve discussed combining the two projects.


AWS re:Invent 2017 Keynote with Andy Jassy

This is a liveblog of the re:Invent 2017 keynote with Andy Jassy, taking place on Wednesday at the Venetian. As fully expected given the long queues and massive crowds, even arriving an hour early to the keynote isn’t soon enough; there’s already a huge crowd gathered to make it into the venue. Fortunately, I did make it in and scored a reasonable seat from which to write this liveblog.


Liveblog: Deep Dive on Amazon Elastic File System

This is a liveblog of the AWS re:Invent 2017 session titled “Deep Dive on Amazon Elastic File System (EFS).” The presenters are Edward Naim and Darryl Osborne, both with AWS. This is my last session of day 2 of re:Invent; thus far, most of my time has been spent in hands-on workshops with only a few breakout sessions today. EFS is a topic I’ve watched, but haven’t had time to really dig into, so I’m looking forward to this session.


Liveblog: IPv6 in the Cloud - Protocol and Service Overview

This is a liveblog of an AWS re:Invent 2017 breakout session titled “IPv6 in the Cloud: Protocol and Service Overview.” The presenter’s name is Alan Halachmi, who is a Senior Manager of Solutions Architecture at AWS. As with so many of the other breakout sessions and workshops here at re:Invent this year, the queues to get into the session are long and it’s expected that the session will be completely full.


A Sample Makefile for Creating Blog Articles

In October of this year, I published a blog post talking about a sample Makefile for publishing blog articles. That post focused on the use of make and a Makefile for automating the process of publishing a blog post. This post is a companion to that post, and focuses on the use of a Makefile for automating the creation of blog posts.


Installing MultiMarkdown 6 on Fedora 27

Long-time readers are probably aware that I’m a big fan of Markdown. Specifically, I prefer the MultiMarkdown variant that adds some additional extensions beyond “standard” Markdown. As such, I’ve long used Fletcher Penny’s MultiMarkdown processor (the latest version, version 6, is available on GitHub). While Fletcher offers binary builds for Windows and macOS, the Linux binary has to be compiled from source. In this post, I’ll provide the steps I followed to compile a MultiMarkdown binary for Fedora 27.


Using Docker Machine with KVM and Libvirt

Docker Machine is, in my opinion, a useful and underrated tool. I’ve written before about using Docker Machine with various services/providers; for example, see this article on using Docker Machine with AWS, or this article on using Docker Machine with OpenStack. Docker Machine also supports local hypervisors, such as VMware Fusion or VirtualBox. In this post, I’ll show you how to use Docker Machine with KVM and Libvirt on a Linux host (I’m using Fedora 27 as an example).


Happy Thanksgiving 2017

In the US, today (Thursday, November 23) is Thanksgiving. I’d like to take a moment to reflect on the meaning of Thanksgiving.


Installing Older Docker Client Binaries on Fedora

Sometimes there’s a need to have different versions of the Docker client binary available. On Linux this can be a bit challenging because you don’t want to install a “full” Docker package (which would also include the Docker daemon); you only need the binary. In this article, I’ll outline a process I followed to get multiple (older) versions of the Docker client binary on my Fedora 27 laptop.


Installing Postman on Fedora 27

I recently had a need to install the Postman native app on Fedora 27. The Postman site itself only provides a link to the download and a rather generic set of instructions for installing the Postman native app (a link to these instructions for Ubuntu 16.04 is also provided). There were not, however, any directions for Fedora. Hence, I’m posting the steps I took to set up the Postman native app on my Fedora 27 laptop.


Making AWS re:Invent More Family-Friendly

AWS re:Invent is just around the corner, and Spousetivities will be there to help bring a new level of family friendliness to the event. If you’re thinking of bringing a spouse, partner, or significant other with you to Las Vegas, I’d encourage you to strongly consider getting him or her involved in Spousetivities.


Technology Short Take 90

Welcome to Technology Short Take 90! This post is a bit shorter than most, as I’ve been on the road quite a bit recently. Nevertheless, there’s hopefully something here you’ll find useful.


How to Tag Docker Images with Git Commit Information

I’ve recently been working on a very simple Flask application that can be used as a demo application in containerized environments (here’s the GitHub repo). It’s nothing special, but it’s been useful for me as a learning exercise—both from a Docker image creation perspective as well as getting some additional Python knowledge. Along the way, I wanted to be able to track versions of the Docker image (and the Dockerfile used to create those images), and link those versions back to specific Git commits in the source repository. In this article, I’ll share a way I’ve found to tag Docker images with Git commit information.


Deep Dive into Container Images in Kolla

This is a liveblog of my last session at the Sydney OpenStack Summit. The session title is “OpenStack images that fit your imagination: deep dive into container images in Kolla.” The presenters are Vikram Hosakote and Rich Wellum, from Cisco and Lenovo, respectively.


Carrier-Grade SDN-Based OpenStack Networking Solution

This session was titled “Carrier-Grade SDN Based OpenStack Networking Solution,” led by Daniel Park and Sangho Shin. Both Park and Shin are from SK Telecom (SKT), and (based on the description) this session is a follow-up to a session from the Boston summit where SK Telecom talked about an SDN-based networking solution they’d developed and released for use in their own 5G-based network.

