OSS


In April of this year, we started a series of articles at Network Heresy on the topic of policy in the data center. The first of these articles, which I mentioned in this post, was a great introduction to the need for policy and the challenges with the current ways of addressing it in the data center.

A short while ago, we published the second of our series on policy, titled “On Policy in the Data Center: The solution space”. This post describes the key features/functionality that a policy system must have to address the challenges identified in part 1 of the series. In a nutshell (I highly recommend you go read the full article), these key areas include:

  • The sources from which policy is derived
  • The language(s) used to express policy
  • The way policy systems interact with data center services
  • The actions a policy system can take

I really liked this statement from the article (this is in reference to how a policy system interacts with other services in the data center):

A policy system by itself is useless; to have value, the policy system must interact and integrate with other data center or cloud services.

The relationship between a policy system and the ecosystem of data center services with which it interacts is so critical. Having a policy system is great, but if the policy system can’t be integrated with other data center or cloud services, then it’s not very useful, is it?

Go have a look at the second post in the series on policy in the data center and feel free to join in the conversation. You can leave comments here or at the Network Heresy site.


In an earlier post, I provided an introduction to OpenStack Heat, and provided an example Heat template that launched two instances with a logical network and a logical router. Here I am going to provide another view of a Heat template that does the same thing, but uses YAML and the HOT format instead of JSON and the CFN format.

Here’s the full template:
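(This is a representative sketch rather than a verbatim copy of the original; the provider network UUID, security group UUID, and Glance image UUID are placeholders you’d replace with values from your own environment.)

heat_template_version: 2013-05-23

description: Two instances on a private network behind a logical router

resources:
  heat_network_01:
    type: OS::Neutron::Net
    properties:
      name: heat_network_01

  heat_subnet_01:
    type: OS::Neutron::Subnet
    properties:
      name: heat_subnet_01
      cidr: 10.10.10.0/24
      dns_nameservers: [8.8.8.8]
      enable_dhcp: true
      gateway_ip: 10.10.10.254
      network_id: { get_resource: heat_network_01 }

  heat_router_01:
    type: OS::Neutron::Router
    properties:
      admin_state_up: true
      name: heat_router_01

  heat_router_01_gw:
    type: OS::Neutron::RouterGateway
    properties:
      network_id: <provider network UUID>
      router_id: { get_resource: heat_router_01 }

  heat_router_int0:
    type: OS::Neutron::RouterInterface
    properties:
      router_id: { get_resource: heat_router_01 }
      subnet_id: { get_resource: heat_subnet_01 }

  instance0_port0:
    type: OS::Neutron::Port
    properties:
      admin_state_up: true
      network_id: { get_resource: heat_network_01 }
      security_groups: [<default security group UUID>]

  instance1_port0:
    type: OS::Neutron::Port
    properties:
      admin_state_up: true
      network_id: { get_resource: heat_network_01 }
      security_groups: [<default security group UUID>]

  instance0:
    type: OS::Nova::Server
    properties:
      name: instance0
      image: <Glance image UUID>
      flavor: m1.xsmall
      networks:
        - port: { get_resource: instance0_port0 }

  instance1:
    type: OS::Nova::Server
    properties:
      name: instance1
      image: <Glance image UUID>
      flavor: m1.xsmall
      networks:
        - port: { get_resource: instance1_port0 }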

I won’t walk through the whole template again, but rather just talk briefly about a couple of the differences between this YAML-encoded template and the earlier JSON-encoded template:

  • You’ll note the syntax is much simpler. JSON can trip you up on commas and such if you’re not careful; YAML is simpler and cleaner.
  • You’ll note the built-in functions are different, as I pointed out in my first Heat post. Instead of using Ref to refer to an object defined elsewhere in the template, HOT uses get_resource instead.

Aside from these differences, you’ll note that the resource types and properties match between the two; this is because resource types are separate and independent from the template format.

Feel free to post any questions, corrections, or clarifications in the comments below. Thanks for reading!


In this post, I’m going to provide a quick introduction to OpenStack Heat, the orchestration service that allows you to spin up multiple instances, logical networks, and other cloud services in an automated fashion. Note that this is only an introductory post—I’m not an expert on Heat, but I did want to share at least some basic information to help others get started as well.

Let’s start with some terminology, so that there is no confusion about the terms later when we start using them in specific examples:

  • Stack: In Heat parlance, a stack is the collection of objects—or resources—that will be created by Heat. This might include instances (VMs), networks, subnets, routers, ports, router interfaces, security groups, security group rules, auto-scaling rules, etc.
  • Template: Heat uses the idea of a template to define a stack. If you wanted to have a stack that created two instances connected by a private network, then your template would contain the definitions for two instances, a network, a subnet, and two network ports. Since templates are central to how Heat operates, I’ll show you examples of templates in this post.
  • Parameters: A Heat template has three major sections, and one of those sections defines the template’s parameters. These are tidbits of information—like a specific image ID, or a particular network ID—that are passed to the Heat template by the user. This allows users to create more generic templates that could potentially use different resources.
  • Resources: Resources are the specific objects that Heat will create and/or modify as part of its operation, and the second of the three major sections in a Heat template.
  • Output: The third and last major section of a Heat template is the output, which is information that is passed to the user, either via OpenStack Dashboard or via the heat stack-list and heat stack-show commands.
  • HOT: Short for Heat Orchestration Template, HOT is one of two template formats used by Heat. HOT is not backwards-compatible with AWS CloudFormation templates and can only be used with OpenStack. Templates in HOT format are typically—but not necessarily required to be—expressed as YAML (more information on YAML here). (I’ll do my best to avoid saying “HOT template,” as that would be redundant, wouldn’t it?)
  • CFN: Short for AWS CloudFormation, this is the second template format that is supported by Heat. CFN-formatted templates are typically expressed in JSON (see here and see my non-programmer’s introduction to JSON for more information on JSON specifically).

OK, that should be enough to get us going. (BTW, the OpenStack Heat documentation actually has a really good glossary. Please note that this link might break as OpenStack development continues.)

Architecturally, Heat has a few major components:

  • The heat-api component implements an OpenStack-native RESTful API. This component processes API requests by sending them to the Heat engine via AMQP.
  • The heat-api-cfn component provides an API compatible with AWS CloudFormation, and also forwards API requests to the Heat engine over AMQP.
  • The heat-engine component provides the main orchestration functionality.

All of these components would typically be installed on an OpenStack “controller” node that also housed the API servers for Nova, Glance, Neutron, etc. As far as I know, though, there is nothing that requires them to be installed on the same system. Like most of the rest of the OpenStack services, Heat uses a back-end database for maintaining state information.

Now that you have an idea about Heat’s architecture, I’ll walk you through an example template that I created and tested on my own OpenStack implementation (running OpenStack Havana on Ubuntu 12.04 with KVM and VMware NSX). Here’s the full template:

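(As above, this is a representative sketch rather than a verbatim copy; the placeholder UUIDs mark values that were specific to my environment.)

{
    "AWSTemplateFormatVersion" : "2010-09-09",

    "Resources" : {

        "heat_network_01" : {
            "Type" : "OS::Neutron::Net",
            "Properties" : {
                "name" : "heat_network_01"
            }
        },

        "heat_subnet_01" : {
            "Type" : "OS::Neutron::Subnet",
            "Properties" : {
                "cidr" : "10.10.10.0/24",
                "dns_nameservers" : ["8.8.8.8"],
                "enable_dhcp" : "True",
                "gateway_ip" : "10.10.10.254",
                "network_id" : { "Ref" : "heat_network_01" }
            }
        },

        "heat_router_01" : {
            "Type" : "OS::Neutron::Router",
            "Properties" : {
                "admin_state_up" : "True",
                "name" : "heat_router_01"
            }
        },

        "heat_router_01_gw" : {
            "Type" : "OS::Neutron::RouterGateway",
            "Properties" : {
                "network_id" : "<UUID of pre-existing provider network>",
                "router_id" : { "Ref" : "heat_router_01" }
            }
        },

        "heat_router_int0" : {
            "Type" : "OS::Neutron::RouterInterface",
            "Properties" : {
                "router_id" : { "Ref" : "heat_router_01" },
                "subnet_id" : { "Ref" : "heat_subnet_01" }
            }
        },

        "instance0_port0" : {
            "Type" : "OS::Neutron::Port",
            "Properties" : {
                "admin_state_up" : "True",
                "network_id" : { "Ref" : "heat_network_01" },
                "security_groups" : ["<UUID of default security group>"]
            }
        },

        "instance1_port0" : {
            "Type" : "OS::Neutron::Port",
            "Properties" : {
                "admin_state_up" : "True",
                "network_id" : { "Ref" : "heat_network_01" },
                "security_groups" : ["<UUID of default security group>"]
            }
        },

        "instance0" : {
            "Type" : "OS::Nova::Server",
            "Properties" : {
                "name" : "instance0",
                "image" : "<UUID of Glance image>",
                "flavor" : "m1.xsmall",
                "networks" : [{ "port" : { "Ref" : "instance0_port0" } }]
            }
        },

        "instance1" : {
            "Type" : "OS::Nova::Server",
            "Properties" : {
                "name" : "instance1",
                "image" : "<UUID of Glance image>",
                "flavor" : "m1.xsmall",
                "networks" : [{ "port" : { "Ref" : "instance1_port0" } }]
            }
        }
    }
}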

Let’s walk through this template real quick:

  • First, note that I’ve specified the template version as “AWSTemplateFormatVersion”. One thing that confused me at first was the relationship between the template format (CFN vs. HOT) and resource types. It turns out these are independent of one another; you can—as I have done here—use HOT resource types (like OS::Neutron::Net) in a CFN template. Obviously, if you use HOT resources you’re not fully compatible with AWS. Also, as I stated earlier, CFN templates are typically expressed in JSON (as mine is). Heat does support YAML for CFN templates, although again you’d be sacrificing AWS compatibility.
  • You’ll note that my template skips any use of parameters and goes straight to resources. This is perfectly acceptable, although it means that some values (like the shared public provider network to which the logical router uplinks and the security group) have to be hard-coded in the template.
  • One thing the template format does control is some of the syntax. For example, you’ll note the template uses “Resources”, “Type”, and “Properties”; in the HOT format, these keys are lowercase (“resources”, “type”, “properties”).
  • The first resource defined is a logical network, defined as type OS::Neutron::Net.
  • The next resource is a subnet (of type OS::Neutron::Subnet), which is associated with the previously-defined logical network through the use of the Ref built-in function. Built-in functions are another thing controlled by the template format, so when you want to refer to another object in a CFN template, you’ll use the Ref function as I did here. This associates the “network_id” property of the subnet with the logical network defined just prior. You’ll also note that the subnet resource has a number of properties associated with it—CIDR, DNS name servers, DHCP, and gateway IP address.
  • The third resource defined is a logical router.
  • After the logical router is defined, the template links the logical router to a pre-existing provider network via the OS::Neutron::RouterGateway type. (This was deprecated in Icehouse in favor of an external_gateway_info property on the logical router.) The UUID listed there is the UUID of a pre-existing provider network. Note the use of the Ref function again to link this resource back to the logical router.
  • Next up the template creates an interface on the logical router, using two Ref instances to link this router interface back to the logical router and the subnet created earlier. This means we are adding an interface to the referenced logical router on the specified subnet (and that interface will assume the IP address specified by the “gateway_ip” property on the subnet).
  • Next the template creates two Neutron ports, and links them to the default security group. Note that if you don’t specify a security group when creating the Neutron port, it will have none—and no traffic will pass.
  • Finally, the Heat template creates two instances (type OS::Nova::Server), using the “m1.xsmall” flavor and a hard-coded Glance image ID. These instances are connected to the Neutron ports created earlier using the Ref function once more.

(In case it wasn’t obvious already, you can’t just copy-and-paste this Heat template and use it in your own environment, as it references UUIDs for objects in my environment that won’t be the same.)

If you are going to use JSON (as I have here), then I’d recommend bookmarking a JSON validation site, such as jsonlint.com.

Once you have your Heat template defined, you can then use this template to create a stack, either via the heat CLI client or via the OpenStack Dashboard. I’ll attach a screenshot from a stack that I deployed via the Dashboard so that you can see what it looks like (click the image for a larger version):

A deployed Heat stack in OpenStack Dashboard
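If you’d rather use the CLI, creating a stack from a template file looks something like this (the file and stack names here are hypothetical):

heat stack-create -f two_instance_stack.json two_instance_demo
heat stack-list

Once heat stack-list reports a status of CREATE_COMPLETE, the instances, logical network, and logical router are ready.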

Kinda nifty, don’t you think? Anyway, I hope this brief introduction to OpenStack Heat has proven useful. I do plan on covering some additional topics with OpenStack Heat in the near future, so stay tuned. In the meantime, if you have any questions, corrections, or clarifications, I invite you to add them to the comments below.


Most IT vendors agree that more extensive use of automation and orchestration in today’s data centers are beneficial to customers. The vendors may vary in their approach to providing this automation and orchestration—some may prefer to do it in software (VMware would be one of these, along with other software companies like Microsoft and Red Hat), while others want to do it in hardware. There are advantages and disadvantages to each approach, naturally, and customers need to evaluate the various solutions against their own requirements to find the best fit.

However, the oft-overlooked problem that more extensive use of automation and orchestration creates is one of control—specifically, how customers can control this automation and orchestration according to their own specific policy. A recent post on the Network Heresy site discusses the need for policy in fully automated IT environments:

However, fully automated IT management is a double-edged sword. While having people on the critical path for IT management was time-consuming, it provided an opportunity to ensure that those resources were managed sensibly and in a way that was consistent with how the business said they ought to be managed. In other words, having people on the critical path enabled IT resources to be managed according to business policy. We cannot simply remove those people without also introducing a way of ensuring that IT resources retain the same level of policy compliance.

VMware, along with a number of other companies, has launched an open source effort to address this challenge: finding a way to enable customers to manage their resources according to their business policy, and do so in a cloud-agnostic way. This effort is called Congress, and it has received some attention from those who think it’s a critical project. I’m really excited to be involved in this project, and equally excited to be working with some extremely well-respected individuals across a number of different companies (this is most definitely not a VMware-only project). I believe that creating an open source solution to the policy problem will further the cause of cloud computing and the transformation of our industry. I strongly urge you to read this first post, titled “On Policy in the Data Center: The policy problem”, and stay tuned for future blog posts that will dive into even greater detail. Exciting times are ahead!


Recently a couple of open source software (OSS)-related announcements have passed through my Inbox, so I thought I’d make brief mention of them here on the site.

Mirantis OpenStack

Last week Mirantis announced the general availability of Mirantis OpenStack, its own commercially-supported OpenStack distribution. Mirantis joins a number of other vendors also offering OpenStack distributions, though Mirantis claims to be different on the basis that its OpenStack distribution is not tied to a particular Linux distribution. Mirantis is also differentiating through support for some additional projects:

  • Fuel (Mirantis’ own OpenStack deployment tool)
  • Savanna (for running Hadoop on OpenStack)
  • Murano (a service for assisting in the deployment of Windows-based services on OpenStack)

It’s fairly clear to me that at this stage in OpenStack’s lifecycle, professional services are a big play in helping organizations stand up OpenStack (few organizations have the deep expertise to really stand up sizable installations of OpenStack on their own). However, I’m not yet convinced that building and maintaining your own OpenStack distribution is going to be as useful and valuable for the smaller players, given the pending competition from the major open source players out there. Of course, I’m not an expert, so I could be wrong.

Inktank Ceph Enterprise

Ceph, the open source distributed storage system, is now coming in a fully-supported version aimed at enterprise markets. Inktank has announced Inktank Ceph Enterprise, a bundle of software and support aimed to increase adoption of Ceph among enterprise customers. Inktank Ceph Enterprise will include:

  • Open source Ceph (version 0.67)
  • New “Calamari” graphical manager that provides management tools and performance data with the intent of simplifying management and operation of Ceph clusters
  • Support services provided by Inktank; this includes technical support, hot fixes, bug prioritization, and roadmap input

Given Ceph’s integration with OpenStack, CloudStack, and open source hypervisors and hypervisor management tools (such as libvirt), it will be interesting to see how Inktank Ceph Enterprise takes off. Will the adoption of Inktank Ceph Enterprise be gated by enterprise adoption of these related open source technologies, or will it help drive their adoption? I wonder if it would make sense for Inktank to pursue some integration with VMware, given VMware’s strong position in the enterprise market. One thing is for certain: it will be interesting to see how things play out.

As always, feel free to speak up in the comments to share your thoughts on these announcements (or any other related topic). All courteous comments are welcome.


A short while ago I posted an article that described how to use Puppet for account management. In that article, I showed you how to use virtual user resources to manage user accounts on Puppet-managed systems. In this post, I’m going to expand on that configuration so that we can also manage the initial account configuration files, and do so in the proper order.

One of the things the configuration in my first post didn’t handle was managing the files in /etc/skel and making sure those files were in place before the user accounts were created. As a result, it was possible that a user account could be created on a system before the /etc/skel files were updated, and then that user account would have “unmanaged” copies of the initial configuration files. Further Puppet agent runs wouldn’t correct the problem, because the files in /etc/skel are only copied over when the account is created. If the account has already been created, then it’s too late—the files in /etc/skel must be managed before the accounts are. To fix the issue, you have to ensure that the resources are processed in a specific order. In this post, I’ll show you how to manage that.

There are two parts to extending the Puppet accounts module to also manage some configuration files:

  1. Add a subclass to manage the files.
  2. Create a dependency between the virtual user resources and this new subclass.

Let’s look at each of these.

Adding a Subclass

To add a subclass to manage the configuration files, I created config.pp and placed it in the manifests folder for the accounts module. Here’s a simplified look at the contents of that file:
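(The snippet below is a sketch; the specific file and source path are illustrative rather than taken from the actual module.)

class accounts::config {
  # Manage one of the initial account configuration files in /etc/skel,
  # sourcing its contents from the Puppet master
  file { '/etc/skel/.bashrc':
    ensure => present,
    owner  => 'root',
    group  => 'root',
    mode   => '0644',
    source => 'puppet:///modules/accounts/bashrc',
  }
}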

This is pretty straightforward Puppet code; it creates a managed file resource and specifies that the file be sourced from the Puppet master server. The full and actual accounts::config subclass that I’m using has a number of managed file resources, including files in /etc/skel, but I’ve omitted that here for the sake of brevity. (The other file resources that are defined look very much like the example shown, so I didn’t see any point in including them.) The config.pp also uses some values from an accounts::params subclass and some conditionals to manage different files on different operating systems.

To really put the subclass to work, though, we have to include it elsewhere. So, in the accounts module’s init.pp, we add a line that simply states include accounts::config. However, the problem that occurs if you stop there is the problem I described earlier: Puppet might create the user account before it places the file resources under management, and then the user account won’t get the updated/managed files.

To fix that, we create a dependency.

Creating a Dependency

Before running into this situation, I was pretty familiar with creating dependencies. For example, if you were defining a class for a particular daemon to run on Linux, you might use the Puppet package-file-service “trifecta”, and you might include a dependency, like this (entirely fictional) example. Note in this example that the file resource is dependent on the package resource, and the service resource is dependent on the file resource (as denoted by the capitalized Package and File instances):
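A sketch of such an example might look like this (the daemon name, file path, and service name are all made up):

class mydaemon {
  package { 'mydaemon':
    ensure => installed,
  }

  # The config file can only be managed once the package is installed
  file { '/etc/mydaemon.conf':
    ensure  => present,
    source  => 'puppet:///modules/mydaemon/mydaemon.conf',
    require => Package['mydaemon'],
  }

  # The service should only start once the config file is in place
  service { 'mydaemon':
    ensure  => running,
    enable  => true,
    require => File['/etc/mydaemon.conf'],
  }
}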

(My apologies if my syntax for this fictional example isn’t perfect—I didn’t run it through puppet-lint.)

The problem in this particular case, though, is that I didn’t need a dependency on a single file; I needed a dependency on a whole group of files. To further complicate matters, the files on which the dependency existed might change between operating systems. For example, I might (and do) have different files on RHEL/CentOS than on Ubuntu/Debian. So how to accomplish this? The answer is actually quite simple: create a dependency on the subclass, not the individual resources.

So, without the dependency, the code to define the virtual users looked like this:
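Here’s a sketch of that version (the username and attribute values are placeholders):

# Define virtual user resources
class accounts::virtual {
  @user { 'jsmith':
    ensure     => present,
    uid        => '1001',
    gid        => '1001',
    home       => '/home/jsmith',
    managehome => true,
    shell      => '/bin/bash',
  }
}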

With the dependency, the code to define the virtual users looks like this:
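And a sketch of the updated version, identical except for the require parameter:

# Define virtual user resources, dependent on accounts::config
class accounts::virtual {
  @user { 'jsmith':
    ensure     => present,
    uid        => '1001',
    gid        => '1001',
    home       => '/home/jsmith',
    managehome => true,
    shell      => '/bin/bash',
    require    => Class['accounts::config'],
  }
}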

The only difference between the two (other than changes in the comments at the top) is the addition of the require statement, which creates a dependency not on a single resource but on an entire subclass—the accounts::config subclass, which in turn has a variety of file resources that are managed according to operating system.

It’s such a simple solution I can’t believe I didn’t see it at first, and when it was pointed out to me (via the #puppet IRC channel, thanks), I had a “Duh!” moment. Even though it is a simple and straightforward solution, if I overlooked it then others might overlook it as well—a problem that hopefully this blog post will help fix.

As always, I welcome feedback from readers, so feel free to weigh in with questions, clarifications, or corrections. Courteous comments are always welcome!


In this third post on using Mock to build RPMs for CentOS 6.3, I’m going to show you how to use Mock to build RPMs for Libvirt 1.0.1 that you can install on CentOS. As you’ll see later, this post builds on the previous two posts (one on using Mock to build RPMs for sanlock 2.4 and one on using Mock to build RPMs for libssh2 1.4.1).

Here’s a quick overview of the process:

  1. Set up Mock and the environment.
  2. Install prerequisites into the Mock environment.
  3. Build the Libvirt RPMs.

Let’s take a closer look at each of these steps.

Setting Up Mock and the Environment

The first phase in the process is to set up Mock and the environment for building the RPMs. Fortunately, this is relatively simple.

First, activate EPEL. My preferred method for activating the EPEL repository is to download the RPM, then use yum localinstall to install it, like this:

wget http://fedora.mirrors.pair.com/epel/6/i386/\
epel-release-6-8.noarch.rpm
yum localinstall epel-release-6-8.noarch.rpm

(Note that I’ve line-wrapped the URL with a backslash to make it more readable. That line-wrapped command actually works just as it is in the shell as well.)

Next, you’ll need to install Mock and related RPM-building tools:

yum install fedora-packager

Third, create a dedicated user for building RPMs. I use “makerpm” as my username, but you could use something else. Just make sure that the name makes sense to you, and that the new user is a member of the mock group:

useradd makerpm -G mock
passwd makerpm

From this point on, you’ll want to be running as this user you just created, so switch to that user with su - makerpm. This ensures that the RPMs are built under this dedicated user account.

The final step in setting up Mock and the build environment is to run the following command while running as the dedicated user you created:

rpmdev-setuptree

Now you’re ready to move on to the next phase: installing prerequisites into the Mock environment.

Installing Prerequisites Into the Mock Environment

One of the great things about Mock is that it creates an isolated chroot into which it installs all the necessary prerequisites for a particular package. This helps ensure that the package’s dependencies are managed correctly. However, if you are trying to build a package where dependencies don’t exist in the repositories, then you have to take a few additional steps. When you’re trying to build libvirt 1.0.1 RPMs for use with CentOS 6.3, you’ll find yourself in exactly this situation. Libvirt has dependencies on newer versions of sanlock-devel and libssh2-devel than are available in the repositories.

Fortunately, there is a workaround—and here’s where those other posts on Mock will come in handy. Use the instructions posted here to build RPMs for sanlock 2.4, and use the instructions here to build RPMs for libssh2 1.4.1.

Once the RPMs are built (and they should build without any major issues, based on my testing), then use these commands to get them into the isolated Mock environment (I’ve line-wrapped here with backslashes for readability):

mock -r epel-6-x86_64 --init
mock -r epel-6-x86_64 --install \
~/rpmbuild/RPMS/sanlock-lib-2.4-3.el6.x86_64.rpm \
~/rpmbuild/RPMS/sanlock-devel-2.4-3.el6.x86_64.rpm \
~/rpmbuild/RPMS/libssh2-1.4.1-2.el6.x86_64.rpm \
~/rpmbuild/RPMS/libssh2-devel-1.4.1-2.el6.x86_64.rpm

This will install these packages into the Mock environment, not onto the general Linux installation.

Once you’ve gotten these packages compiled and installed, then you’re ready for the final phase: building the libvirt RPMs.

Building the Libvirt RPMs

As in my earlier post on compiling Libvirt 1.0.1 RPMs for CentOS 6.3, this final step is almost anti-climactic. That’s good, though, because it means you’ve done all the previous steps perfectly.

First, fetch the source RPM from the libvirt HTTP server:

wget http://libvirt.org/sources/libvirt-1.0.1-1.fc17.src.rpm

Next, move the source RPM into the ~/rpmbuild/SRPMS directory:

mv libvirt-1.0.1-1.fc17.src.rpm ~/rpmbuild/SRPMS

Finally, run Mock to rebuild the RPMs:

mock -r epel-6-x86_64 --no-clean ~/rpmbuild/SRPMS/libvirt-1.0.1-1.fc17.src.rpm

Note that the --no-clean parameter is required here to prevent Mock from cleaning out the chroot and getting rid of the packages you installed into the environment earlier.

This command should run without any errors or problems, and produce a set of RPMs (typically) found in /var/lib/mock/epel-6-x86_64/result. You can then take these RPMs and install them on another CentOS 6.3 system using yum localinstall.
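For example, getting the new RPMs onto another system and installing them might look something like this (the host name is hypothetical, and the exact file names will match your build):

scp /var/lib/mock/epel-6-x86_64/result/libvirt-*.x86_64.rpm user@centos-vm:
yum localinstall libvirt-1.0.1-1.el6.x86_64.rpm \
libvirt-client-1.0.1-1.el6.x86_64.rpm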

Testing the RPMs

To verify that everything worked as expected, I tested the RPMs using these steps:

  1. Using a clean CentOS 6.3 VM (built using the “Minimal Desktop” option), I used yum groupinstall to install the Virtualization, Virtualization Client, Virtualization Platform, and Virtualization Tools groups. This installed version 0.9.10 of libvirt.

  2. I then installed the updated version of libvirt using yum localinstall. I had to specify the dependencies manually on the command line; I anticipate that had I been using a real repository, this would not have been the case. The updated libvirt, sanlock, and libssh2 packages all installed correctly.

  3. I started the libvirtd service (it worked), and ran virsh --version. It returned version 1.0.1.

I imagine there might be more comprehensive/better ways of testing the RPMs that I built, but they seemed to work fine on my end. If anyone has any other suggestions for how we can double-check to ensure the packages are working correctly, feel free to speak up in the comments below. I also welcome any other corrections, suggestions, or questions in the comments. Courteous comments are always welcome.


As with the related post on using Mock to rebuild sanlock 2.4 for CentOS 6.3, this post might seem a bit odd. Don’t worry—I’ll tie it into something else very soon. In this post, I’ll show you how to use Mock to build RPMs for libssh2 1.4.1 for use with CentOS 6.3.

The information in this post is based on information found in two other very helpful pages:

  • Using Mock to test package builds
  • How to rebuild a package from Fedora or EPEL for RHEL, CentOS, or SL

I tested these instructions on a newly-built CentOS 6.3 VM, installed using the “Minimal Desktop” option. I haven’t tested it on other RHEL variants or other versions, so keep that in mind.

First, you’ll want to activate EPEL. You’ll do that by downloading the RPM and using yum localinstall to install it. You can also use rpm to install it directly from the URL, but I prefer using yum localinstall. I’ve line-wrapped the EPEL URL with a backslash for readability.

wget http://fedora.mirrors.pair.com/epel/6/i386/\
epel-release-6-8.noarch.rpm
yum localinstall epel-release-6-8.noarch.rpm

Once EPEL is installed, then install Mock and related tools:

yum install fedora-packager

This will download and install Mock and related tools such as rpmbuild.

Next, create a user under which you’ll run all these commands, and make sure this account is a member of the mock group:

useradd makerpm -G mock
passwd makerpm

From here on, you’ll want to be running as this user you just created, so switch to that user with su - makerpm.

The first step while running as the user you created is to setup the RPM build environment:

rpmdev-setuptree

Now that the directory structure is created, use wget to download the source RPM for libssh2 1.4.1-2 from the Fedora 17 release repository (the URL is line-wrapped here for readability):

wget http://dl.fedoraproject.org/pub/fedora/linux/releases/17\
/Everything/source/SRPMS/l/libssh2-1.4.1-2.fc17.src.rpm

Move the source RPM to the rpmbuild/SRPMS directory:

mv libssh2-1.4.1-2.fc17.src.rpm ~/rpmbuild/SRPMS

And, finally, rebuild the RPMs with mock:

mock --rebuild ~/rpmbuild/SRPMS/libssh2-1.4.1-2.fc17.src.rpm

Assuming everything completes successfully (it did on my CentOS 6.3 VM), you’ll end up with a group of RPMs found in /var/lib/mock/epel-6-x86_64/result (the exact directory will vary based on OS version and build; I was using 64-bit CentOS 6.3). You should then be able to install those RPMs onto a CentOS 6.3 system using yum localinstall and the prerequisites will be managed properly.

Have fun!


The topic of this post might seem a bit strange, but it will all make sense later. In this post, I’ll show you how to use Mock to build RPMs for sanlock 2.4 for use with CentOS 6.3.

The information in this post is based on information found in two other very helpful pages:

  • Using Mock to test package builds
  • How to rebuild a package from Fedora or EPEL for RHEL, CentOS, or SL

I tested these instructions on a newly-built CentOS 6.3 VM, installed using the “Minimal Desktop” option. I haven’t tested it on other RHEL variants or other versions, so keep that in mind.

First, you’ll want to activate EPEL. You’ll do that by downloading the RPM and using yum localinstall to install it. You can also use rpm to install it directly from the URL, but I prefer using yum localinstall. (Note that the URL for the EPEL RPM is line-wrapped here for readability.)

wget http://fedora.mirrors.pair.com/epel/6/\
i386/epel-release-6-8.noarch.rpm
yum localinstall epel-release-6-8.noarch.rpm

Once EPEL is installed, then install Mock and related tools:

yum install fedora-packager

This will download and install Mock and related tools such as rpmbuild.

Next, create a user under which you’ll run all these commands, and make sure this account is a member of the mock group:

useradd makerpm -G mock
passwd makerpm

From here on, you’ll want to be running as this user you just created, so switch to that user with su - makerpm.

The first step while running as the user you created is to setup the RPM build environment:

rpmdev-setuptree

Now that the directory structure is created, use wget to download the source RPM for sanlock 2.4-3 from the Fedora 17 update repository (the URL is line-wrapped here for readability):

wget http://dl.fedoraproject.org/pub/fedora/linux/updates\
/17/SRPMS/sanlock-2.4-3.fc17.src.rpm

Move the source RPM to the rpmbuild/SRPMS directory:

mv sanlock-2.4-3.fc17.src.rpm ~/rpmbuild/SRPMS

And, finally, rebuild the RPMs with mock:

mock --rebuild ~/rpmbuild/SRPMS/sanlock-2.4-3.fc17.src.rpm

Assuming everything completes successfully (it did on my CentOS 6.3 VM), you’ll end up with a group of RPMs found in /var/lib/mock/epel-6-x86_64/result (the exact directory will vary based on OS version and build; I was using 64-bit CentOS 6.3). You should then be able to install those RPMs onto a CentOS 6.3 system using yum localinstall and the prerequisites will be managed properly.

Enjoy!


In previous articles, I’ve shown you how to compile libvirt 0.10.1 on CentOS 6.3, but—as several readers have pointed out in the comments to that and other articles—compiling packages from source may not be the best long-term approach. Not only does it make it difficult to keep the system up-to-date, it also makes automating the configuration of the host rather difficult. In this post, I’ll show you how to rebuild a source RPM for libvirt 1.0.1 so that it will install (and work) under CentOS 6.3. (These instructions should work for RHEL 6.3, too, but I haven’t tested them.)

Overview

The process for rebuilding a source RPM isn’t too terribly difficult, assuming that you can get the dependencies worked out. Here’s a quick look at the steps involved:

  1. Create a set of build directories for source RPMs.
  2. Download the source RPM and install all prerequisites/dependencies onto the system.
  3. Rebuild the source RPM for the destination system.

Let’s take a look at each of these steps in a bit more detail.

Create the Build Environment

The CentOS wiki has a great page for how to set up an RPM build environment. I won’t repeat all the steps here (refer to the wiki page instead), but here’s a quick summary of what’s involved:

  1. Install the necessary packages (typically you’ll need the rpm-build, redhat-rpm-config, make, and gcc packages). You might also need certain development libraries, but this will vary according to the source RPMs you’re rebuilding (more on that in the next section).
  2. Create the necessary directories under your home directory.

I’ll assume that you’ve followed the steps outlined in the CentOS wiki to set up your environment appropriately before continuing with the rest of this process.

Download the Source RPM and Install Prerequisites

The libvirt 1.0.1 source RPMs are available directly from the libvirt HTTP server, easily downloaded with wget:

wget http://libvirt.org/sources/libvirt-1.0.1-1.fc17.src.rpm

You can just download the source RPM to your home directory. Before you can build a new RPM from the source RPM, though, you’ll first need to install all the various prerequisites that libvirt requires. Most of them can be installed easily using yum with a command like this:

yum install xhtml1-dtds augeas libudev-devel \
libpciaccess-devel yajl-devel libpcap-devel libnl-devel \
avahi-devel radvd ebtables qemu-img iscsi-initiator-utils \
parted-devel device-mapper-devel numactl-devel netcf-devel \
systemtap-sdt-devel scrub numad libblkid-devel

There are two dependencies, though—sanlock and libssh2—where libvirt requires newer versions than are available in the CentOS/EPEL repositories. For those, you’ll need to recompile your own RPMs. Fortunately, this is pretty straightforward. The next two sections provide more details on getting these prerequisites handled.

Building an RPM for sanlock

To build a CentOS 6.3 RPM for version 2.4 of sanlock (the minimum version needed by libvirt 1.0.1), first use wget to download a Fedora 17 version of the source RPM. I’ve wrapped the URL with a backslash for readability:

wget http://dl.fedoraproject.org/pub/fedora/linux/updates/17/SRPMS\
/sanlock-2.4-3.fc17.src.rpm

Next, install a prerequisite library using yum install libaio-devel.

Finally, use rpmbuild to rebuild the sanlock source RPM:

rpmbuild --rebuild sanlock-2.4-3.fc17.src.rpm

This process should proceed without any problems. The resulting RPMs that are created will be found in ~/rpmbuild/RPMS/x86_64 (assuming you are, as I am, using a 64-bit build of CentOS).

Building the RPMs, however, isn’t enough—you need to install them so that you can build the libvirt RPMs. So install the sanlock-devel and sanlock-lib RPMs using yum localinstall (run this command from the directory where the RPMs reside):

yum localinstall sanlock-devel-* sanlock-lib-*

That should take care of the sanlock dependency.

Building an RPM for libssh2

To build a CentOS 6.3 RPM for version 1.4.1 of libssh2 (libvirt 1.0.1 requires at least version 1.3.0), first download the source RPM using wget (I’ve wrapped the URL here for readability):

wget http://dl.fedoraproject.org/pub/fedora/linux/releases\
/17/Everything/source/SRPMS/l/libssh2-1.4.1-2.fc17.src.rpm

(That’s a lowercase L in the URL just after SRPMS.)

Once you have the source RPM downloaded, then just rebuild the source RPM:

rpmbuild --rebuild libssh2-1.4.1-2.fc17.src.rpm

Then install the resulting RPMs using yum localinstall:

yum localinstall libssh2-1.4.1-* libssh2-devel-*

That takes care of the last remaining dependency. You’re now ready to compile the RPMs for libvirt 1.0.1.

Build the Libvirt RPM

This part is almost anticlimactic. Just use the rpmbuild command:

rpmbuild --rebuild libvirt-1.0.1-1.fc17.src.rpm

If you’ve successfully installed all the necessary prerequisites, then the RPM compilation process should proceed without any issues.

Once the RPM compilation process is complete, you’ll find libvirt 1.0.1 RPMs in the ~/rpmbuild/RPMS/x86_64 directory (assuming a 64-bit version of CentOS), which you can easily install with yum localinstall or post to your own custom repository.
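If you go the custom repository route, the createrepo tool can generate the necessary repository metadata; here’s a quick sketch (the directory assumes the build layout described above):

yum install createrepo
createrepo ~/rpmbuild/RPMS/x86_64

Any system with a .repo file pointing at that location can then install the packages with a plain yum install.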

I hope this post helps someone. If you have any questions, or if you spot an error, please speak up in the comments below. All courteous comments are welcome!

