Automation


For the last couple of years, I’ve been sharing my annual “projects list” and then grading myself on the progress (or lack thereof) on the projects at the end of the year. For example, I shared my 2012 project list in early January 2012, then gave myself grades on my progress in early January 2013.

In this post, I’m going to grade myself on my 2013 project list. Here’s the project list I posted just under a year ago:

  1. Continue to learn German.
  2. Reinforce base Linux knowledge.
  3. Continue using Puppet for automation.
  4. Reinforce data center networking fundamentals.

So, how did I do? Here’s my assessment of my progress:

  1. Continue to learn German: I have made some progress here, though certainly not the progress I wanted to make. I’ve incorporated the use of Memrise, which has been helpful, but I still haven’t advanced as far as I’d like. If anyone has suggestions for additional tools, I’m open to your feedback. Grade: D (below average)

  2. Reinforce base Linux knowledge: I’ve been suggesting to VMUG attendees that they need to learn Linux, as it’s popping up all over the place in all sorts of roles. In my original 2013 project list, I said that I was going to focus on RHEL and RHEL variants, but over the course of the year I ended up focusing more on Debian and Ubuntu instead (due to more up-to-date packages and closer alignment with OpenStack). Despite that shift in focus, I think I’ve made decent progress here. There’s always room to grow, of course. Grade: B (above average)

  3. Continue using Puppet for automation: I’ve made reasonable progress here, expanding my use of Puppet to include managing Debian/Ubuntu software repositories (see here and here for examples), managing SSH keys, managing Open vSwitch (OVS) via a third-party module, and—most recently—exploring the use of Puppet with OpenStack (no blog posts—yet). There’s still quite a bit I need to learn (some of my manifests don’t work quite as well as I’d like), but I did make progress here. Grade: C (average)

  4. Reinforce data center networking fundamentals: Naturally, my role at VMware has me spending a great deal of time on how network virtualization affects DC networking, and this translated into some progress on this project. While I gained solid high-level knowledge on a number of DC networking topics, I had originally been thinking I needed more low-level “in the weeds” knowledge. In that regard, I don’t feel like I did well; on the flip side, though, I’m not sure I really needed that depth. This highlights a key struggle for me personally: how to balance deep, “in the weeds” knowledge with high-level knowledge. Suggestions on how others have overcome this challenge are welcome. Grade: C (average)

In summary: not bad, but could have been better!

What’s not reflected in this project list is the progress I made with understanding OpenStack, or my deepened level of knowledge of OVS (just browse articles tagged OVS for an idea of what I’ve been doing in that area).

Over the next week or two, I’ll be reflecting on my progress with my 2013 projects and thinking about what projects I should take on in 2014. In the meantime, I would love to hear any feedback, suggestions, or thoughts on projects I should consider, technologies that should be incorporated, or learning techniques I should leverage. Feel free to speak up in the comments below.


I’m back with another “Reducing the Friction” blog post, this time to talk about training an e-mail spam filter. As you may recall if you read one of the earlier posts (may I suggest this one and this one?), I use the phrase “reducing the friction” to talk about streamlining and simplifying commonly-performed tasks so as to allow you to improve your own personal efficiency and/or adopt more efficient habits or workflows.

I recently moved my e-mail services off Google and onto Fastmail. (I described the reasons why I made this move in this post.) Fastmail has been great—I find their e-mail service to be much faster than what I was seeing with Google. The one drawback, though, has been an increase in spam. Not a huge increase, mind you, but enough to notice. Fastmail recommends that you help train your personal spam filter by moving messages into a folder you designate, and then telling their systems to consider everything in that folder to be spam. While that’s not hard, it’s also not very streamlined, so I took up the task of making it even easier and faster.

(Note that, since I’m a Mac user, most of my tips focus on Mac applications. If you’re a user of another platform, I do apologize, but I can only speak to what I use myself.)

To help make this easier, I came up with this bit of AppleScript:

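(What follows is a representative sketch rather than my exact script; the two property values are placeholders you’ll need to change.)

-- Both property values are placeholders; set them for your system.
property accountName : "Fastmail"
property spamFolder : "Learn as Spam"

-- Designed to be run from within Apple's Mail app (e.g., via FastScripts)
tell application "Mail"
    set selectedMessages to selection
    repeat with eachMessage in selectedMessages
        -- Move each selected message into the spam-training folder
        set mailbox of eachMessage to mailbox spamFolder of account accountName
    end repeat
end tell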

To make this work on your system, all you need to do is change the two property declarations at the top, setting them to the correct values for your environment.

As you can tell by the comments in the code, this script was designed to be run from within Apple’s Mail app itself. To make that easy, I use a fantastic tool called FastScripts (highly recommended!). Using FastScripts, I can easily designate an application-specific shortcut key (I use Ctrl-Cmd-S) to invoke the script from within Apple Mail. Boom! Just like that, you have a super-easy way both to speed up processing your e-mail and to help train your personal spam filter. (Note: if you are also a Fastmail customer, refer to the Fastmail help screens while logged in to get more details on marking a folder for spam learning.)

I hope this helps someone out there!


In this post, I’m going to show you how to manage Open vSwitch (OVS) using the popular open source configuration management tool Puppet. This is not the first time I’ve written about this topic; in the past I showed you how to automate OVS configuration with Puppet via a hack utilizing some RHEL-OVS integrations. This post, however, focuses on the use of an actual Puppet module that will manage the configuration of OVS, a much cleaner solution—in my view, at least—than leveraging the file-based integrations I discussed earlier.

The Puppet module I’ll be using and discussing in this post is the L23Network module (found here on GitHub). This is an extremely flexible and useful module, capable of not only configuring and managing network interfaces but also capable of managing the configuration of OVS. The latter functionality—managing the configuration of OVS—will be the primary focus of this article (with one exception).

The L23Network module is pretty well-documented, so I won’t bother regurgitating the documentation here. Instead, I’ll just try to provide some specific examples, and tie those examples back to some of the various OVS configurations I’ve shown you in earlier posts.

First, let’s get “the one exception” I mentioned earlier out of the way. In OVS environments, you’ll often need to bring up a physical interface without assigning that interface an IP address. For example, consider a physical interface that is providing bridged connectivity to guest domains (VMs) on an OVS bridge. You’ll want the interface to be up, but the interface does not need an IP address. Using the L23Network module, you can accomplish that with this piece of code in your manifest:

l23network::l3::ifconfig {'eth1': ipaddr => 'none'}

Now that eth1 is up, you could create a bridge to which to attach it with this code:

l23network::l2::bridge {'br-ex': }

And then you could actually attach eth1 like this:

l23network::l2::port {'eth1': bridge => 'br-ex'}

You could then provide multi-VLAN bridged connectivity to guest domains via libvirt as I explained in my post on using VLANs with libvirt and OVS. (Or, if you are using LXC with libvirt and OVS, you could provide multi-VLAN bridged connectivity to containers.)

The L23Network module can also work with other types of interfaces, not just physical interfaces. Want to create an internal interface, perhaps to use as a tunnel endpoint for a GRE tunnel as I described here? Use this snippet of Puppet code:

l23network::l2::port {'tep0': bridge => 'br-tun', type => 'internal'}

You could then assign the newly-created tep0 interface an IP address on your transport network like this:

l23network::l3::ifconfig {'tep0': ipaddr => '10.1.1.1/24'}

(In theory, you could also use the L23Network module to create an internal interface so as to run host management through OVS, but then you could run into issues communicating with the Puppet server over the very interfaces Puppet is configuring.)

I haven’t yet used L23Network to create/manage patch ports or GRE ports, but the documentation indicates the module is capable of doing so. This is an area that I plan to explore in a bit more detail in the near future (in my copious free time).
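Putting the pieces together, here’s a sketch of how these snippets might combine into a single class (the profile::openvswitch class name and the br-tun bridge declaration are illustrative choices, not something the module requires):

class profile::openvswitch {
  # Create the bridges for bridged guest traffic and tunnel traffic.
  l23network::l2::bridge { ['br-ex', 'br-tun']: }

  # Bring up the physical interface without an IP address...
  l23network::l3::ifconfig { 'eth1': ipaddr => 'none' }

  # ...and attach it to the bridge providing bridged connectivity.
  l23network::l2::port { 'eth1': bridge => 'br-ex' }

  # Create an internal interface to act as a GRE tunnel endpoint...
  l23network::l2::port { 'tep0':
    bridge => 'br-tun',
    type   => 'internal',
  }

  # ...and give it an IP address on the transport network.
  l23network::l3::ifconfig { 'tep0': ipaddr => '10.1.1.1/24' }
}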

As the sketch above shows, it’s pretty straightforward to combine these various pieces to fully configure and manage OVS instances across a large number of systems. However, if you have any questions, feel free to post them in the comments below. I also welcome all other courteous feedback; you are encouraged to start (or join) the conversation.


Some time ago, I showed you how to use Puppet to add Ubuntu Cloud Archive support to your Ubuntu installation. Since that time, OpenStack has had a new release (the Havana release) and the Ubuntu Cloud Archive repository has been updated with new packages to support the Havana release. In this post, I’ll show you an updated snippet of code to take advantage of these newer packages in the Ubuntu Cloud Archive repository.

For reference, here’s the original Puppet code I posted in the first article:

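(What follows is a representative sketch using the apt::source defined type from the Puppet Labs apt module; the parameter names track the 1.x releases of the module and may differ in yours.)

apt::source { 'ubuntu-cloud':
  location          => 'http://ubuntu-cloud.archive.canonical.com/ubuntu',
  release           => 'precise-updates/grizzly',
  repos             => 'main',
  required_packages => 'ubuntu-cloud-keyring',
}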

That points your Ubuntu installation to the Grizzly packages.

Here’s updated code that will point your installation to the appropriate packages to support OpenStack’s Havana release:

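(Again, a sketch along the same lines:)

apt::source { 'ubuntu-cloud':
  location          => 'http://ubuntu-cloud.archive.canonical.com/ubuntu',
  release           => 'precise-updates/havana',
  repos             => 'main',
  required_packages => 'ubuntu-cloud-keyring',
}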

As you can see, there is only one small change between the two code snippets: changing “precise-updates/grizzly” in the first to “precise-updates/havana” in the second. (Naturally, this assumes you’re using Ubuntu 12.04, the latest LTS release as of this writing.) I know this seems like a pretty simple thing to post, but I wanted to include it here for the sake of completeness and the benefit of future readers.

Feel free to speak up in the comments with any questions, suggestions, or corrections.


Welcome to Technology Short Take #36. In this episode, I’ll share a variety of links from around the web, along with some random thoughts and ideas along the way. I try to keep things related to the key technology areas you’ll see in today’s data centers, though I do stray from time to time. In any case, enough with the introduction—bring on the content! I hope you find something useful.

Networking

  • This post is a bit older, but still useful if you’re interested in learning more about OpenFlow and OpenFlow controllers. Nick Buraglio has put together a basic reference OpenFlow controller VM: a KVM guest running CentOS 6.3 with the Floodlight open source controller.
  • Paul Fries takes on defining SDN, breaking it down into two “flavors”: host dominant and network dominant. This is a reasonable way of grouping the various approaches to SDN (using SDN in the very loose industry sense, not the original control plane-data plane separation sense). I’d like to add to Paul’s analysis that it’s important to understand that, in reality, host dominant and network dominant systems can coexist. It’s not at all unreasonable to think that you might have a fabric controller that is responsible for managing/optimizing traffic flows across the physical transport network/fabric, and an overlay controller—like VMware NSX—that integrates tightly with the hypervisor(s) and workloads running on those hypervisors to create and manage logical connectivity and logical network services.
  • This is an older post from April 2013, but still useful, I think. In his article titled “OpenFlow Test Deployment Options”, Brent Salisbury, a rock star among the new breed of network engineers emerging in the world of SDN, discusses some practical strategies for deploying OpenFlow into an existing network topology. One key statement that I really liked from this article was this one: “SDN does not represent the end of networking as we know it. More than ever, talented operators, engineers and architects will be required to shape the future of networking.” New technologies don’t make talented folks who embrace change obsolete; if anything, these new technologies make them more valuable.
  • Great post by Ivan (is there a post by Ivan that isn’t great?) on flow table explosion with OpenFlow. He does a great job of explaining how OpenFlow works and why OpenFlow 1.3 is needed in order to see broader adoption of OpenFlow.

Servers/Hardware

  • Intel announced the E5 2600 v2 series of CPUs back at Intel Developer Forum (IDF) 2013 (you can follow my IDF 2013 coverage by looking at posts with the IDF2013 tag). Kevin Houston followed up on that announcement with a useful post on vSphere compatibility with the E5 2600 v2. You can also get more details on the E5 2600 v2 itself in this related post by Kevin. (Although I’m just now catching Kevin’s posts, they were published almost immediately after the Intel announcements. Thanks for the promptness, Kevin!)

Security

Nothing this time around, but I’ll keep my eyes peeled for content to share with you in future posts.

Cloud Computing/Cloud Management

Operating Systems/Applications

  • I found this refresher on some of the most useful apt-get/apt-cache commands to be helpful. I don’t use some of them on a regular basis, so it’s hard to remember the specific command and/or syntax when I do need one.
  • I wouldn’t have initially thought to compare Docker and Chef, but since I’m not an expert in either technology, that may just reflect my limited understanding. In any case, this post on why Docker and why not Chef does a good job of looking at ways that Docker could potentially replace certain uses for Chef. Personally, I tend to lean toward the author’s final conclusion that it’s entirely possible we’ll see Docker and Chef being used together. (I reserve the right to revise my view in the future.)

Storage

  • Using Dell EqualLogic with VMFS? Better read this heads-up from Cormac Hogan and take the recommended action right away.
  • Erwin van Londen proposes some ideas for enhancing FC error detection and notification with the idea of making hosts more aware of path errors and able to “route” around them. It’s interesting stuff; as Erwin points out, though, even if the T11 accepted the proposal it would be a while before this capability showed up in actual products.

Virtualization

That’s it for this time around, but feel free to continue the conversation in the comments below. If you have any additional information to share regarding any of the topics I’ve mentioned, please take the time to add that information in the comments. Courteous comments are always welcome!


In this post, I’ll show you how I extended my solution for managing user accounts with Puppet to include managing SSH authorized keys. With this solution in place, user accounts managed through Puppet can also include their SSH public key, and that public key will automatically be installed on hosts where the account is realized. All in all, I think it’s a pretty cool solution.

Just to refresh your memory, here’s the Puppet manifest code I posted in the original article; it uses define-based virtual user resources that you then realize on a per-host basis.

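(In sketch form; the resource attributes shown here are representative rather than verbatim, and the jdoe user is purely hypothetical.)

define accounts::virtual ($uid, $realname, $pass) {
  user { $title:
    ensure     => present,
    uid        => $uid,
    comment    => $realname,
    password   => $pass,
    shell      => '/bin/bash',
    home       => "/home/${title}",
    managehome => true,
  }
}

# Each user is declared once as a virtual resource...
@accounts::virtual { 'jdoe':
  uid      => 1001,
  realname => 'John Doe',
  pass     => '<password hash>',
}

# ...and realized only on the hosts that need the account:
realize Accounts::Virtual['jdoe']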

Since I posted this original code, I’ve made a few changes. I switched some of the hard-coded values to parameters (stored in a separate subclass), and I made a few stylistic/syntactic changes based on running the code through puppet-lint. But, by and large, this is still quite similar to the code I’m running right now.

Here’s the code after I modified it to include managing SSH authorized keys for user accounts:

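(Again a representative sketch; the accounts::params class and its variable names are assumptions on my part.)

define accounts::virtual ($uid, $realname, $pass, $sshkeytype, $sshkey) {
  include accounts::params

  user { $title:
    ensure     => present,
    uid        => $uid,
    comment    => $realname,
    password   => $pass,
    shell      => $accounts::params::shell,
    home       => "${accounts::params::home_path}/${title}",
    managehome => true,
  }

  # Install the user's SSH public key when the account is realized.
  ssh_authorized_key { "${title}_${sshkeytype}":
    ensure  => present,
    user    => $title,
    type    => $sshkeytype,
    key     => $sshkey,
    require => User[$title],
  }
}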

Let’s walk through the changes between the two snippets of code:

  • Two new parameters are added, $sshkeytype and $sshkey. These parameters hold, quite naturally, the SSH key type and the SSH key itself.
  • Several values are parameterized, pulling values from the accounts::params manifest.
  • There are a number of stylistic and syntactic changes.
  • The accounts::virtual define now includes a stanza using the built-in ssh_authorized_key resource type. This is the real heart of the changes: by adding this to the virtual user resource, it makes sure that when users are realized, their SSH public keys are added to the host.

With this code in place, you’d then define a user like this:

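(All of the values here are hypothetical, and the key material is truncated.)

@accounts::virtual { 'jdoe':
  uid        => 1001,
  realname   => 'John Doe',
  pass       => '<password hash>',
  sshkeytype => 'ssh-rsa',
  sshkey     => 'AAAAB3NzaC1yc2EAAAADAQAB...',
  require    => Class['accounts::config'],
}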

The requirement for Class['accounts::config'] is to ensure that various configuration tasks are finished before the user account is defined; I discussed this in more detail in this post on Puppet, user accounts, and configuration files. Now, when I realize a virtual user resource, Puppet will also ensure that the user’s SSH public key is automatically added to the user’s .ssh/authorized_keys file on that host. Pretty sweet, eh? Further, if the key ever changes, you need only change it on the Puppet server itself, and on the next Puppet agent run the hosts will update themselves.

I freely admit that I’m not a Puppet expert, so there might be better/faster/more efficient ways of doing this. If you are a Puppet expert, please feel free to weigh in below in the comments. I welcome all courteous comments!


In this post, I’ll share with you some Puppet code that you can include in your manifests to install Open vSwitch (OVS) packages on Ubuntu. This post, along with a number of others (like using Puppet for Ubuntu Cloud Archive support or using Puppet to configure an Apt proxy) stems from my work on building a new home lab in which I’ll be doing some OpenStack and NSX testing.

This code makes a couple of assumptions:

  1. It assumes that you’ve established an internal Apt repository (I created one using reprepro). In the code below, you’ll see that I’ve used the Puppet Labs Apt module to define my internal Apt repository.
  2. It assumes that you have Debian packages for OVS in that internal Apt repository. Depending on which version of OVS you need (I needed a newer version than was available in the public repositories), you might be able to get away with just using the public repositories.

OK, with the assumptions out of the way, let’s have a look at the code:

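(The code below is a sketch of the approach rather than my exact manifest; the repository URL is a placeholder, and package names may vary with the OVS version you’re installing.)

apt::source { 'internal':
  location => 'http://apt.lab.example.com/ubuntu',
  release  => 'precise',
  repos    => 'main',
}

# Prerequisites for building the DKMS module.
package { ['dkms', "linux-headers-${::kernelrelease}"]:
  ensure => installed,
}

# The DKMS module must be built before OVS itself is installed.
package { 'openvswitch-datapath-dkms':
  ensure  => installed,
  require => Package['dkms', "linux-headers-${::kernelrelease}"],
}

package { ['openvswitch-common', 'openvswitch-switch']:
  ensure  => installed,
  require => Package['openvswitch-datapath-dkms'],
}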

The code is fairly straightforward; the key is making sure that the appropriate packages are installed before you attempt to install the OVS DKMS module. This is reflected in the require statement for the openvswitch-datapath-dkms package.

I’ve only tested this on Ubuntu 12.04 LTS, so use at your own risk on other distributions and other versions.

As always, I encourage you to participate in the discussion by adding your questions, thoughts, suggestions, and/or clarifications in the comments below. All courteous comments are welcome.


Win a Copy of Pro Puppet

Want to win a free copy of a book on Puppet? I recently came into possession of a second copy of Pro Puppet, a good book for those looking to get a bit deeper into declarative configuration management with Puppet. Since I can’t use two copies, I’m giving one of them away to a lucky winner.

Here’s how to enter:

  1. Leave a comment here on this site. Be sure to use a valid e-mail address, because that’s what I’ll use to contact you.
  2. In your comment, you must include a specific example of how you’d like to use Puppet in your environment to solve a configuration management problem. For example: “I’d like to learn to use Puppet to manage the configuration of my Apache web servers.” Any comments that don’t include a specific example of how you’d use Puppet won’t be considered in the final drawing.

Since the shipping for the book is coming out of my own pocket, I’ll have to limit this to US-based readers only (sorry, international readers!). The shipping costs outside the US are simply prohibitive.

Good luck!

UPDATE: The contest has been closed and the winner has been notified. Thanks to everyone who entered!


In this post, I’ll share a brief snippet of Puppet code that allows you to automatically configure Ubuntu clients to use Apt-Cacher-NG. By leveraging Apt-Cacher-NG, running apt-get commands on your Ubuntu instances will generally be faster, because the Apt-Cacher-NG server caches packages locally instead of requiring that every request go out to the source repositories. In my own lab I’ve seen a tremendous speed boost when installing updates and frequently-used packages on my Ubuntu instances. You can get more information on the Apt-Cacher-NG website.

(Note: The Puppet code in this post relies upon the same Puppet Labs apt module that I used in my post on using Puppet to configure Ubuntu to use the Ubuntu Cloud Archive.)

This snippet of Puppet code will take care of configuring apt to use a local Apt-Cacher-NG instance:

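(My actual code uses the Puppet Labs apt module; as a minimal, module-agnostic sketch, a plain file resource dropping a proxy snippet into /etc/apt/apt.conf.d accomplishes the same thing. The hostname and port below are placeholders.)

# Point apt at a local Apt-Cacher-NG instance.
file { '/etc/apt/apt.conf.d/01proxy':
  ensure  => file,
  owner   => 'root',
  group   => 'root',
  mode    => '0644',
  content => "Acquire::http::Proxy \"http://apt-cacher.lab.example.com:3142\";\n",
}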

This is a really simple block of code, but I’m publishing it here for the sake of completeness and in the remote event someone else finds it useful. Because this is a distro-specific thing (it only applies to Debian and Debian derivatives like Ubuntu), you might want to wrap this in a conditional (like if $::osfamily == 'Debian' or similar) to prevent errors in the event this manifest is (accidentally) applied to a non-Debian distribution.

Questions, corrections, and other feedback are welcome, so feel free to speak up in the comments below.


In this post, I’ll share a snippet of Puppet code that I am using to automatically configure Ubuntu Server 12.04 systems to use the Ubuntu Cloud Archive (which allows access to packaged versions of OpenStack for use with LTS releases of Ubuntu Server, like 12.04).

As you may already know, I recently acquired two off-lease Dell PowerEdge C6100 systems. Each of these units has four trays; each tray is a dual-socket, quad-core server with 24GB of RAM. This gives me a total of eight servers, and the plan is to use them to build an internal OpenStack cloud running Ubuntu 12.04, KVM, Open vSwitch (OVS), and—ultimately—VMware NSX. It’s a fairly ambitious goal, but if you don’t stretch yourself you’ll never grow.

In any case, along the way I’m trying to make the whole process as repeatable and automated as possible, and naturally that’s where Puppet comes into play. I’ve been working through an automated Ubuntu Server install via PXE and an internal HTTP repository (I’ll do a separate post for that), but as part of my testing I wanted to be sure that I could automatically configure the Ubuntu Server instances to use the Ubuntu Cloud Archive for access to OpenStack packages. While this isn’t necessarily hard, I did want to share the Puppet code I’m using just in case it might help someone else.

First off, you’ll want to get your hands on the Puppet Labs apt module from the Forge. Once you’ve gotten that installed on your Puppet server (a simple puppet module install puppetlabs/apt on any recent version of Puppet should knock that out for you), then you can use this snippet of code in a manifest:

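(Here’s a representative sketch; the apt::source defined type and parameter names track the 1.x releases of the apt module and may differ in yours.)

apt::source { 'ubuntu-cloud':
  location          => 'http://ubuntu-cloud.archive.canonical.com/ubuntu',
  release           => 'precise-updates/grizzly',
  repos             => 'main',
  required_packages => 'ubuntu-cloud-keyring',
}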

Once you put this into the Puppet manifest and then refresh the system’s configuration, you should see a file named ubuntu-cloud.list appear in the /etc/apt/sources.list.d directory on your Ubuntu system. (By the way, I usually wrap that code in a conditional like if $::operatingsystem == 'Ubuntu' or similar.) Once that file is there, simply run apt-get update and you should now be able to install packages from the Ubuntu Cloud Archive.

Have fun!

