
Welcome to Technology Short Take #43, another episode in my irregularly-published series of articles, links, and thoughts from around the web, focusing on data center technologies like networking, virtualization, storage, and cloud computing. Here’s hoping you find something useful.

Networking

  • Jason Edelman recently took a look at Docker networking. While Docker is receiving a great deal of attention, I have to say that I feel Docker networking is a key area that hasn’t received the amount of attention that it probably needs. It would be great to see Docker get support for connecting containers directly to Open vSwitch (OVS), which is generally considered the de facto standard for networking on Linux hosts.
  • Ivan Pepelnjak asks the question, “Is OpenFlow the best tool for overlay virtual networks?” While so many folks see OpenFlow as the answer regardless of the question, Ivan takes a solid look at whether there are better ways of building overlay virtual networks. I especially liked one of the last statements in Ivan’s post: “Wouldn’t it be better to keep things simple instead of introducing yet-another less-than-perfect abstraction layer?”
  • Ed Henry tackles the idea of abstraction vs. automation in a fairly recent post. It’s funny—I think Ed’s post might actually be a response to a Twitter discussion that I started about the value of the abstractions that are being implemented in Group-based Policy (GBP) in OpenStack Neutron. Specifically, I was asking if there was value in creating an entirely new set of abstractions when it seemed like automation might be a better approach. Regardless, Ed’s post is a good one—the decision isn’t about one versus the other, but rather recognizing, in Ed’s words, “abstraction will ultimately lead to easier automation.” I’d agree with that, with one change: the right abstraction will lead to easier automation.
  • Jason Horn provides an example of how to script NSX security groups.
  • Interested in setting up overlays using Open vSwitch (OVS)? Then check out this article from the ever-helpful Brent Salisbury on setting up overlays on OVS.
  • Another series on VMware NSX has popped up, this time from Jon Langemak. Only two posts so far (but very thorough posts), one on setting up VMware NSX and another on logical networking with VMware NSX.

Servers/Hardware

Nothing this time around, but I’ll keep my eyes open for more content to include next time.

Security

  • Someone mentioned I should consider using pfctl and its ability to automatically block remote hosts exceeding certain connection rate limits. See here for details.
  • Bromium published some details on an Android security flaw that’s worth reviewing.

Cloud Computing/Cloud Management

  • Want to add some Docker to your vCAC environment? This post provides more details on how it is done. Kind of cool, if you ask me.
  • I am rapidly being pulled “higher” up the stack to look at tools and systems for working with distributed applications across clusters of servers. You can expect to see some content here soon on topics like fleet, Kubernetes, Mesos, and others. Hang on tight, this will be an interesting ride!

Operating Systems/Applications

  • A fact that I think is sometimes overlooked when discussing Docker is access to the Docker daemon (which, by default, is accessible only via UNIX socket—and therefore accessible locally only). This post by Adam Stankiewicz tackles configuring remote TLS access to Docker, which addresses that problem.
  • CoreOS is a pretty cool project that takes a new look at how Linux distributions should be constructed. I’m kind of bullish on CoreOS, though I haven’t had nearly the time I’d like to work with it. There’s a lot of potential, but also some gotchas (especially right now, before a stable product has been released). The fact that CoreOS takes a new approach to things means that you might need to look at things a bit differently than you had in the past; this post tackles one such item (pushing logs to a remote destination).
  • Speaking of CoreOS: here’s how to test drive CoreOS from your Mac.
  • I think I may have mentioned this before; if so, I apologize. It seems like a lot of folks are saying that Docker eliminates the need for configuration management tools like Puppet or Chef. Perhaps (or perhaps not), but in the event you need or want to combine Puppet with Docker, a good place to start is this article by James Turnbull (formerly of Puppet, now with Docker) on building Puppet-based applications inside Docker.
  • Here’s a tutorial for running Docker on CloudSigma.

Storage

  • It’s interesting to watch the storage industry go through the same sort of discussion around what “software-defined” means as the networking industry has gone through (or, depending on your perspective, is still going through). A few articles highlight this discussion: this one by John Griffith (Project Technical Lead [PTL] for OpenStack Cinder), this response by Chad Sakac, this response by the late Jim Ruddy, this reply by Kenneth Hui, and finally John’s response in part 2.

Virtualization

  • The ability to run nested hypervisors is the primary reason I still use VMware Fusion on my laptop instead of switching to VirtualBox. In this post Cody Bunch talks about how to use Vagrant to configure nested KVM on VMware Fusion for using things like DevStack.
  • A few different folks in the VMware space have pointed out the VMware OS Optimization Tool, a tool designed to help optimize Windows 7/8/2008/2012 systems for use with VMware Horizon View. Might be worth checking out.
  • The VMware PowerCLI blog has a nice three-part series on working with Customization Specifications in PowerCLI (part 1, part 2, and part 3).
  • Jason Boche has a great collection of information regarding vSphere HA and PDL. Definitely be sure to give this a look.

That’s it for this time around. Feel free to speak up in the comments and share any thoughts, clarifications, corrections, or other ideas. Thanks for reading!


I’ve written before about adding an extra layer of network security to your Macintosh by leveraging the BSD-level ipfw firewall, in addition to the standard GUI firewall and additional third-party firewalls (like Little Snitch). In OS X Lion and OS X Mountain Lion, though, ipfw was deprecated in favor of pf, the powerful packet filter that I believe originated on OpenBSD. (OS X’s version of pf is ported from FreeBSD.) In this article, I’m going to show you how to use pf on OS X.

Note that this is just one way of leveraging pf, not necessarily the only way of doing it. I tested (and am currently using) this configuration on OS X Mountain Lion 10.8.3.

There are two basic pieces involved in getting pf up and running on OS X Mountain Lion:

  1. Putting pf configuration files in place.
  2. Creating a launchd item for pf.

Let’s look at each of these pieces in a bit more detail. We’ll start with the configuration files.

Putting Configuration Files in Place

OS X Mountain Lion comes with a barebones /etc/pf.conf preinstalled. This configuration file references a single anchor found in the /etc/pf.anchors directory. This anchor, however, does not contain any actual pf rules; instead, it appears to be nothing more than a placeholder.

Since there is a configuration file already in place, you have two options ahead of you:

  1. You can overwrite the existing configuration file. The drawbacks of this approach are that a) Apple has been known to change this file during system updates, undoing your changes; and b) it could break future OS X functionality.

  2. You can bypass the existing configuration file. This is the approach I took, partly due to the reasons listed above and partly because I found that pfctl (the program used to manage pf) wouldn’t activate the filter rules when the existing configuration file was used. (It complained about improper order of lines in the existing configuration file.)

Note that some tools (like IceFloor) take the first approach and modify the existing configuration file.

I’ll assume you’re going to use option #2. What you’ll need, then, are (at a minimum) two configuration files:

  1. The configuration file that pf will parse on startup
  2. At least one anchor file that contains the various options and rules you want to pass to pf when it starts

Since we’re bypassing the existing configuration file, all you really need is an extremely simple configuration file that points to your anchor and loads it, like this:
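A minimal configuration file along these lines should do the trick (the “customrules” anchor name and file path here are placeholders; adjust them to match your own anchor file):

```
anchor "customrules"
load anchor "customrules" from "/etc/pf.anchors/customrules"
```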

The other file you need has the actual options and rules that will be passed to pf when it starts. You can get fancy here and use a separate file to define macros and tables, or you can bundle the macros and tables in with the rules. Whatever approach you take, be sure that you have the commands in this file in the right order: options, normalization, queueing, translation, and filtering. Failure to put things in the right order will cause pf not to enable and will leave your system without this additional layer of network protection.

A very simple set of rules in an anchor might look something like this:
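For instance, something like the following (these rules are purely illustrative, and the subnet is an example; note the ordering of options, normalization, and filtering):

```
# Options and normalization come first
set block-policy drop
set skip on lo0
scrub in all

# Default deny inbound; allow all outbound traffic, keeping state
block in all
pass out all keep state

# Allow inbound SSH from the local subnet (example subnet)
pass in proto tcp from 192.168.1.0/24 to any port 22
```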

Naturally, you’d want to customize these rules to fit your environment. At the end of this article I provide some additional resources that might help with this task.

Once you have the configuration file in place and at least one anchor defined with rules (in the right order!), then you’re ready to move ahead with creating the launchd item for pf so that it starts automatically.

However, there is one additional thing you might want to do first—test your rules to be sure everything is correct. Use this command in a terminal window while running as an administrative user:

sudo pfctl -v -n -f <path to configuration file>

If this command reports errors, go back and fix them before proceeding.

Creating the launchd Item for pf

Creating the launchd item simply involves creating a properly formatted XML file and placing it in /Library/LaunchDaemons. It must be owned by root; otherwise, it won’t be processed at all. If you aren’t clear on how to make sure it’s owned by root, go do a bit of reading on sudo and chown.

Here’s a launchd item you might use for pf:
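Something like the following should work as a starting point (the label and the path to the configuration file are examples only; adjust them for your system):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "">
<plist version="1.0">
<dict>
	<key>Label</key>
	<string>org.example.pf</string>
	<key>ProgramArguments</key>
	<array>
		<string>/sbin/pfctl</string>
		<string>-e</string>
		<string>-f</string>
		<string>/etc/pf.anchors/custom.pf.conf</string>
	</array>
	<key>RunAtLoad</key>
	<true/>
</dict>
</plist>
```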

A few notes about this launchd item:

  • You’ll want to change the last <string> item under the ProgramArguments key to properly reflect the path and filename of the custom configuration file you created earlier. In my case, I’m storing both the configuration file and the anchor in the /etc/pf.anchors directory.
  • As I stated earlier, you must ensure this file is owned by root once you put it into /Library/LaunchDaemons. It won’t work otherwise.
  • If you have additional parameters you want/need to pass to pfctl, add them as separate lines in the ProgramArguments array. Each individual argument on the command line must be a separate item in the array.

Once this file is in place with the right ownership, you can either use launchctl to load it or restart your computer. The robust pf firewall should now be running on your OS X Mountain Lion system. Enjoy!

Some Additional Resources

Finally, it’s important to note that I found a few different web sites helpful during my experimentations with pf on OS X. This write-up was written with Lion in mind, but applies equally well to Mountain Lion, and this site—while clearly focused on OpenBSD and FreeBSD—was nevertheless quite helpful as well.

It should go without saying, but I’ll say it nevertheless: courteous comments are welcome! Feel free to add your thoughts, ideas, questions, or corrections below.


As you might have noticed in recent blog posts, I’m spending a fair amount of time working with open source solutions like Ubuntu Linux, OpenBSD, Puppet, and similar. As part of the effort to make myself more familiar with these and other open source projects, I’ve decided to re-architect my home network using predominantly open source software.

Here are the open source software projects that I know for sure I’ll end up using:

  • Ubuntu Server 12.04 LTS
  • OpenBSD (probably version 5.1)
  • Squid and the Squidguard content filter
  • BIND v9
  • ISC DHCP server
  • Open source Puppet

However, there are a few packages that I haven’t quite settled on yet. I’d love to hear some feedback on these questions:

  1. What do you recommend for low-volume web serving—Apache HTTP or Nginx? (Manageability via Puppet is a consideration, too.)
  2. It looks as if I can use Heartbeat to provide high availability/failover at the application level for the web and web proxy services (this would be active/passive only). Anyone have any experience with Heartbeat, or some good resources to share?
  3. It would be great if I could actually do load balanced sessions for the web and web proxy services (active/active instead of active/passive). It appears as if LVS will do this, but it also looks like I’ll need separate VMs (everything will be virtualized) for LVS. Anyone have some resources for LVS?
  4. Are there any other projects or tools I should be considering?

Thanks for any help or information you can provide!


As I mentioned in a previous post, the next iteration of my Puppet explorations involves the use of Hiera. Hiera, a project also managed by Puppet Labs, is described as “a simple pluggable hierarchical database.” In the Puppet world, what that means is we can use Hiera to store data values outside of the manifests, then look them up dynamically as the configurations are being compiled and applied to the nodes. In a future post, I’ll provide an example of how you could use Hiera in a multi-OS Puppet environment.

For now, though, I just want to talk about how to get Hiera up and running and working in a Puppet environment, as it wasn’t as straightforward as I expected it to be.

I’m using the same virtual environment for this post as I’ve used in previous posts. I have a Puppet master server running on an Ubuntu 12.04 LTS VM; this master server services client VMs running Ubuntu 12.04 LTS, CentOS 5.8, and OpenBSD 5.1. Keep in mind that if you are using a different distribution of Linux than what I’m using here, your specific directories and paths might be slightly different.

Here are the steps that I took to get Hiera up and running on the Puppet master server:

  1. First, I used gem install hiera hiera-puppet to install Hiera and Hiera-Puppet (the connecting code between Hiera and Puppet).

  2. Optionally, you can next run gem list to verify that Hiera and Hiera-Puppet are included in the list of installed Ruby gems.

  3. You’ll need to know exactly where the Hiera-Puppet files are found on your system, so run gem list -d to show the details of the installed gems. This will include the path where the files are found. On my Ubuntu 12.04 LTS Puppet master server, Hiera and Hiera-Puppet were installed to /var/lib/gems/1.8/gems (in their own directories, respectively).

  4. Next, you’ll need to know where Puppet stores its modules. Run puppet master --configprint modulepath to get the list of directories where Puppet modules are stored. On my Ubuntu 12.04 LTS Puppet master server, that included /etc/puppet/modules.

  5. Copy the Hiera-Puppet directory (found in step 3) to one of the Puppet module directories (found in step 4). This ensures that Puppet can actually leverage Hiera. Note that this Puppet Labs post provides a different process for accomplishing this; I found that their process didn’t work in my environment.

  6. Hiera will need a directory to store its configuration files; I used /etc/puppet/hieradata. I don’t think it matters where the directory is; just make a note of where.

  7. You’ll need to create a Hiera configuration file located in the main Puppet directory (on my Ubuntu 12.04 LTS Puppet master server, this was /etc/puppet). This file is called hiera.yaml and will probably need to look something like this (you’d specify the directory created in step 6 on the last line here):

     ---
     :backends:
       - yaml
     :hierarchy:
       - %{operatingsystem}
       - common
     :yaml:
       :datadir: '/etc/puppet/hieradata'

  8. In the directory you created for Hiera and specified in hiera.yaml, you’ll need to create the YAML files for your hierarchy. Since I’m using a hierarchy based on operating system, I created YAML files for OpenBSD, CentOS, Ubuntu, etc. One note: the file names must match the operating system name returned by Facter exactly, including case (i.e., OpenBSD.yaml instead of openbsd.yaml).
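As an example, an OpenBSD.yaml file in this hierarchy might contain something like this (the keys and values shown are hypothetical; use whatever data your manifests actually look up):

```yaml
---
# Hypothetical OS-specific values looked up by the manifests
ntp_conf_path: '/etc/ntpd.conf'
ntp_service_name: 'ntpd'
```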

At this point, you should now have a working Hiera installation. In a future post, I’ll show you how I used Hiera with Puppet to create OS-based customized configuration files.

In working to get Hiera up and running, I found the following websites and pages to be helpful:

First Look: Installing and Using Hiera (part 1 of 2)
Puppet configuration variables and Hiera

Corrections, suggestions for improvement, or questions are always welcome! Speak up in the comments below.


I received some great feedback on my post about using Puppet with multiple operating systems. One of the suggestions was to do a better job of following the “official” Puppet style guide for syntax and file layout. With that in mind, I installed puppet-lint on my Puppet master server using apt-get install rubygems followed by gem install puppet-lint.

Using puppet-lint, I was able to correct all “errors” in the manifest, but was left with one warning regarding line length. This is, as far as I know, an uncorrectable warning, as the line that puppet-lint finds is the line that specifies the source of the OpenBSD packages. I can’t shorten the URL to the packages because that’s not under my control.

In any case, here are the corrected Puppet manifests. I checked these with both puppet parser validate and puppet-lint, but if anyone has anything I’ve missed feel free to point it out in the comments. Unless otherwise stated, all files are placed in the modules/ntp/manifests directory under the Puppet directory (which on my Puppet master server is /etc/puppet).


# NTP class definition

class ntp {
  include "ntp::$::operatingsystem"

  file { 'ntp.conf':
    ensure  => present,
    path    => '/etc/ntp.conf',
    owner   => 'root',
    group   => 'root',
    mode    => '0644',
    source  => 'puppet:///modules/ntp/ntp.conf',
    require => Package['ntp'],
  }

  package { 'ntp':
    ensure => installed,
  }

  service { 'ntp':
    ensure    => running,
    subscribe => File['ntp.conf'],
    require   => File['ntp.conf'],
  }
}


# NTP subclass for OpenBSD

class ntp::openbsd inherits ntp {
  File['ntp.conf'] {
    path   => '/etc/ntpd.conf',
    group  => 'wheel',
    source => "puppet:///modules/ntp/ntpd.conf.$::operatingsystem",
  }

  Package['ntp'] {
    source => '',
  }

  Service['ntp'] {
    provider  => 'base',
    hasstatus => false,
    start     => '/usr/sbin/ntpd',
  }
}


# NTP subclass for Ubuntu Linux

class ntp::ubuntu inherits ntp {
  Service['ntp'] {
    provider => 'init',
    path     => '/etc/init.d/',
  }
}


# NTP subclass for CentOS Linux

class ntp::centos inherits ntp {
  Service['ntp'] {
    name => 'ntpd',
  }
}

I know that there are NTP modules for Puppet that “take care” of all this sort of thing for you, but creating this was, for me, part of the learning process. I’m going to tackle SSH next, and—per the comments on my first Puppet post—I’ve also started reading up on Hiera to see how that might fit in here. The initial reading I’ve done leads me to believe that the combination of Puppet and Hiera could be quite powerful.

As always, courteous comments are welcome. Feel free to speak up below!


Over the last few days, I’ve been working with the open source edition of Puppet, the configuration management/automation tool. I’ve learned quite a bit, but I still have a long, long way to go. What I wanted to share in this post is what I learned in using Puppet with clients using different operating systems (OSes). If you are a Puppet expert, I’d love to hear any tips and tricks you might have to help me improve.

For now, I won’t go into all the details involved in setting up a Puppet master and Puppet client systems, as that process is reasonably well covered elsewhere. (Although not directly related to Puppet specifically, Jonas Rosland has some instructions for installing Puppet on Ubuntu Server 12.04 LTS here.) If you’re not already familiar with getting Puppet up and running—in a very basic configuration, at least—then have a look at one of the many tutorials that are available.

For my testing, I used the following environment:

  • I chose to use Ubuntu Server 12.04 LTS for my Puppet master server. It was assigned a static IP address.
  • I used client systems running OpenBSD 5.1, Ubuntu Server 12.04 LTS, and CentOS 5.8. These systems were assigned dynamic IP addresses via DHCP.
  • Both the Puppet master server as well as the clients were VMs running under VMware Fusion 4.1.2 on Mac OS X 10.6.8.
  • I already had a solid DNS infrastructure in place (using BIND on OpenBSD as the master with a slave running Mac OS X Server 10.6.8), so I leveraged that for my test environment.

For the purposes of this first attempt at using Puppet, I decided I would try to automate the configuration of NTP on the client systems. Before starting the testing, I took care of the SSL certificates (using puppet agent --test on the client side to perform an initial connection to the Puppet master server, then puppet cert sign on the Puppet master server to sign the client certificates). This accomplished two things:

  1. It verified that DNS resolution was working and that the clients could communicate with the Puppet master server.
  2. It took care of getting the SSL certificates signed and in place.

Now that all the preliminaries are out of the way, let’s get into my specific configuration, and what I want to share with you. If you’ve looked at Puppet at all, you’ll recall that the “trifecta,” so to speak, of Puppet manifests (a manifest is Puppet’s declarative configuration file) is the use of the file-package-service combination of resources, like this:

Obviously, this is just an example; you’d want/need to customize the values specified above for the various resources for your particular installation. The point is that it’s very common (as I understand it) to have this file-package-service combination in Puppet manifests.

This is all well and good, but what about when you need to manage multiple OSes with Puppet? What do you do then? Consider, for example, the OSes involved in my environment: OpenBSD, Ubuntu, CentOS. I know for certain that OpenBSD’s implementation of NTP uses a different configuration file than Ubuntu’s NTP package. How would one handle that?

Dominic Cleal’s blog gave me the kickstart I needed to get started down the path to a multi-OS manifest. At first I experimented with conditionals in the manifest, like this:
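A conditional of that sort might look something like this sketch, using a selector on the $operatingsystem fact (the values here are illustrative):

```puppet
file { 'ntp.conf':
  ensure => present,
  path   => $operatingsystem ? {
    'OpenBSD' => '/etc/ntpd.conf',
    default   => '/etc/ntp.conf',
  },
  source => "puppet:///modules/ntp/ntp.conf.$operatingsystem",
}
```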

That worked fine for the file definition, but it broke down when I got to the package definition. Why? OpenBSD packages require that you define a source, but other platforms don’t necessarily need (or want) a source defined.

After futzing around with various conditionals, I finally read the rest of Dominic’s post, which suggested the use of subclasses for various operating systems. Now that I had read through pages and pages of Puppet manuals and visited site after site trying to figure out how to make this work, the idea of subclasses made sense. So I created an ntp module (or class) with subclasses named ntp::common and ntp::openbsd, using the syntax Dominic shared.

Sadly, it didn’t work. When the Puppet agent tried to apply the configuration, it complained that the “ntp.conf” file resource had already been defined once in the ntp::common section and couldn’t be defined again in the ntp::openbsd section.

Never one to give up so easily (and thanks for encouraging words on Twitter, Cody—”Simple is for the weak. You’ve got this!”), I turned to the #puppet IRC channel, where I was enlightened as to my mistake—my syntax was off. Instead of using the syntax to define a resource in the subclasses, I needed to use the syntax to refer back to an already-defined class.

Let’s assume I used this sort of definition in ntp::common:
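For example, a resource definition along these lines (illustrative values):

```puppet
file { "ntp.conf":
  ensure => present,
  path   => "/etc/ntp.conf",
  source => "puppet:///modules/ntp/ntp.conf",
}
```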

If that’s the case, then this is the syntax I needed in the ntp::openbsd subclass:
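The override would then look something like this (again, illustrative values):

```puppet
File["ntp.conf"] {
  path   => "/etc/ntpd.conf",
  source => "puppet:///modules/ntp/ntpd.conf",
}
```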

See what that does? By using the File["ntp.conf"] syntax instead of the file { "ntp.conf" } syntax, I simply referred back to an existing resource and overrode the previously-defined values. This allowed me to create OS-specific subclasses with OS-specific settings. In turn, that allows me to manage different OSes using a single module with subclasses. If I ever need to add a new OS to be managed, I can simply add another subclass with the OS-specific settings and I’m off to the races.

Cool, huh? (And not too terribly shabby for a beginner, I think!)

If you’re interested, you can get the full manifest (or module) that I created here. You’ll note that I’m using the Puppet file serving mechanism, which I understand is not “best practices” any longer; I’ll probably update it soon to the new module-based mechanism. Puppet experts are encouraged to share any ideas/tips/tricks that might further improve this method, and everyone is welcome to share their (courteous) comments. Enjoy!


Using multiple layers of security has long been recognized as a useful strategy in hardening your computers against attack or exploit. In this post, I want to explain how to set up and configure the BSD-level ipfw firewall that is present in Mac OS X. While ipfw is certainly not a security panacea, it can be a solid part of a broader security strategy.

Setting up ipfw on Mac OS X has three basic steps:

  1. Create a shell script that launches ipfw.
  2. Create a configuration file that the shell script from step 1 uses when launching ipfw.
  3. Create a LaunchDaemon in Mac OS X that calls the shell script from step 1 to start and configure ipfw every time your Mac boots.

Let’s take a deeper look at each of these steps.

Create a Startup Shell Script

This part is harder than it sounds. At its most basic level, the script only needs to call /sbin/ipfw and a configuration file, like this:


#!/bin/sh
/sbin/ipfw -q /etc/ipfw.conf

I did quite a bit of digging to see if something more than that was suggested, and finally came up with this startup shell script:

#!/bin/sh
# Startup script for ipfw on Mac OS X

# Flush existing rules
/sbin/ipfw -f -q flush

# Silently drop unsolicited connections
/usr/sbin/sysctl -w net.inet.tcp.blackhole=2
/usr/sbin/sysctl -w net.inet.udp.blackhole=1

# Load the firewall ruleset
/sbin/ipfw -q /etc/ipfw.conf

This startup shell script can generally be put anywhere; I chose to put it in /usr/local/bin (which may not exist by default on your system).

With the startup shell script in place, you’re now ready to proceed to the next step, which is perhaps the most involved and detailed step in the process.

Create the Configuration File

The configuration file contains all of the firewall rule definitions for ipfw and is therefore one of the most complicated steps. This complexity is not because the configuration file itself is difficult, but rather because the rules that should be included will vary greatly from user to user and network to network. I strongly encourage you to do your own research to understand what sort of firewall rules are most appropriate in your environment and for your setup.

You can (theoretically) place the configuration file anywhere; I chose to place the file in /etc as ipfw.conf (very original, I know). If you do use something other than /etc/ipfw.conf, then adjust the startup shell script accordingly.

Rather than provide any sort of suggested firewall ruleset here, let me suggest some other sites that provide excellent information on suggested rulesets for ipfw:

Use a custom firewall in 10.5 with ipfw (Mac OS X Hints)

Setting up firewall rules on Mac OS X

Configuring IPFW Firewalls on OS X

From those sites—and there are many others besides just those—you should be able to put together an ipfw ruleset that is right for your network and your environment. Once you have the configuration file created and in place, then you’re ready for the final step: ensuring that ipfw launches automatically when you boot your Mac.

Create a LaunchDaemon

The final step is ensuring that ipfw launches automatically every time your Mac boots. This is accomplished by creating a text file—known as a property list file, or a plist file—with very specific contents into the /Library/LaunchDaemons folder.

Here’s the plist file I use (you can use a different name, like your own domain name, in the filename):
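A sketch of what such a plist might contain follows (the label is an example, and the script path shown assumes the startup shell script lives in /usr/local/bin; substitute your own values):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "">
<plist version="1.0">
<dict>
	<key>Label</key>
	<string>org.example.ipfw</string>
	<key>ProgramArguments</key>
	<array>
		<string>/usr/local/bin/</string>
	</array>
	<key>RunAtLoad</key>
	<true/>
</dict>
</plist>
```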

Don’t just copy and paste this file “as is” into your system! You’ll need to customize it to fit your system. Specifically, under the ProgramArguments key, the path to and name of the startup shell script should be adjusted to match the shell script you created earlier. In my case, the script is found in /usr/local/bin. This startup shell script should, in turn, refer to the ipfw configuration file you created.

I believe—but I could be mistaken—that you’ll need to set ownership of the LaunchDaemon plist file to root:wheel. You can do this using the chown command in Terminal.

Once the plist file is in place, reboot your Mac. Once your Mac boots up and you’ve logged in, fire up the Terminal and run this command (you will need to use sudo if your account has administrative privileges; if your account doesn’t have administrative privileges, you should log in as an account that does in order to test things):

sudo ipfw list

This command should return the list of firewall rules you embedded in the configuration file. If it doesn’t, then go back and double-check your setup. Be sure that the plist file has the correct reference to the startup shell script, and that the startup shell script has the correct reference to the configuration file. You should also check to ensure that you made the startup shell script executable (using the chmod command).

If the command does return your firewall ruleset, then you’re all set.

Note that using ipfw does not in any way prevent you from using other firewalls—such as the built-in application-level firewall in Mac OS X—to further secure your system.

Questions? Comments? Clarifications? Please feel free to speak up in the comments below to add your thoughts.


Let’s get right to the point and set the record straight: I am not, nor have I ever been, affiliated with or employed by the FBI or any other government agency.

That’s why I was surprised when word surfaced that I had been implicated in some sort of conspiracy regarding a plan to place secret backdoors into an OpenBSD cryptographic framework, and that my recent advocacy of OpenBSD was based on my alleged involvement with the FBI.

I don’t know where the person who started this rumor got his information, but he is sadly mistaken regarding my involvement. Perhaps the other Scott Lowe is involved; I don’t know. What I do know is this: I’m not affiliated with, supported by, employed by, associated with, or in support of the FBI in any way, shape, form, or fashion. Quite simply, it wasn’t me.

Feel free to post any additional questions or courteous comments below. I’ll answer all relevant questions openly and honestly.


This is one of those posts that is as much for my own benefit as it is for others. For a few weeks now, I’ve been working on a dynamic DNS setup for my home/home office network involving BIND and the ISC DHCP daemon running on a pair of OpenBSD virtual machines. I finally got it to work (thanks in no small part to this article and this how-to post) and then found that I needed to make some manual edits to the DNS zones.

After a great deal of stumbling and fumbling, I found an obscure reference to a need to use rndc when making manual edits. After some testing, I learned that the “correct” way to make manual edits is as follows:

  1. Halt changes to the dynamic DNS zone with the command rndc freeze <zone name>.
  2. Make the manual edits to the zone file, being sure to increment the zone serial number.
  3. Use the command named-checkzone <zone name> <zone file> to verify the syntax in the zone file.
  4. Allow changes to the dynamic DNS zone with the command rndc thaw <zone name>.
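The four steps above can be sketched in shell form. The zone name and zone file path are examples only, and the small `run` wrapper just prints each command when BIND's tools aren't installed, so the sketch can be dry-run anywhere:

```shell
#!/bin/sh
# Example zone -- substitute your own dynamic zone and zone file.
ZONE="example.com"
ZONEFILE="/var/named/master/example.com.db"

# Run the command if the tool is installed; otherwise just print it.
run() {
    if command -v "$1" >/dev/null 2>&1; then "$@"; else echo "would run: $*"; fi
}

run rndc freeze "$ZONE"                    # 1. halt dynamic updates
# 2. edit "$ZONEFILE" here, incrementing the zone serial number
run named-checkzone "$ZONE" "$ZONEFILE"    # 3. verify the zone syntax
run rndc thaw "$ZONE"                      # 4. resume dynamic updates
```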

If you monitor the appropriate log files (on my system I had to monitor /var/log/daemon), you’ll see zone transfers take place to any secondary name servers, a strong indicator that the change has been successfully accepted and propagated.

A very simple task, I know, but hopefully this post will help me the next time I need to perform it, and perhaps it will help someone else out there in the same situation.


If you were following my tweets over the last few days, you probably already know that I have been working on setting up a CCNA study environment using Ubuntu Linux, GNS3, and VMware Workstation. After a couple of days of difficulties, I finally managed to make it work last night. Here are the steps that I took to make it work.

Before we start, there is the standard disclaimer: these are the steps that worked for me; these steps might or might not work for you, and are almost guaranteed not to work with different Linux distributions or different versions of the associated software.

Here are the software components and versions that I am using in my environment:

  • Ubuntu Linux 8.04.4 LTS, 32 bit
  • GNS3 0.6.1
  • Dynamips 0.2.8-RC2
  • Dynagen
  • VMware Workstation 7.0.1 for Linux, 32 bit

I won’t go into great detail on setting up Ubuntu Linux as there are plenty of resources available for that portion of this environment. You will need to be at least vaguely familiar with the Linux command-line interface (CLI) and basic Linux commands, or you’ll find this process a bit difficult.

Once you have Ubuntu Linux installed and configured appropriately, the first step is to go ahead and install some dependencies using apt-get:

sudo apt-get install dynagen python-qt4

This should download and install both Dynagen and the Python-QT4 libraries. Next, you’ll need to download and install GNS3 0.6.1. There are newer versions of GNS3 available, but earlier attempts to get this environment running with the newer version of GNS3 resulted in problems. Again, your results might differ. Version 0.6.1 of GNS3 is available from the GNS3 SourceForge site.

Once you have GNS3 downloaded, extract it into the directory of your choice (I chose to use /opt/GNS3).

After you’ve downloaded and extracted GNS3, create the following directories under the directory where GNS3 is found:

<GNS3 directory>/project
<GNS3 directory>/ios
<GNS3 directory>/cache
<GNS3 directory>/tmp
<GNS3 directory>/dynamips

Use the chmod and chown commands as necessary to ensure that your user account has full read/write permissions on all of these directories except the dynamips directory.
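As a sketch, the directory setup looks like this. The GNS3_DIR location is an assumption (I used /opt/GNS3, but a directory under $HOME keeps the sketch sudo-free); if you work under /opt you'll need sudo for the chmod/chown calls:

```shell
#!/bin/sh
# GNS3_DIR is an assumption -- point it at wherever you extracted GNS3.
GNS3_DIR="${GNS3_DIR:-$HOME/GNS3}"

# Create the working directories GNS3 expects.
mkdir -p "$GNS3_DIR/project" "$GNS3_DIR/ios" "$GNS3_DIR/cache" \
         "$GNS3_DIR/tmp" "$GNS3_DIR/dynamips"

# Your account needs full read/write access to everything but dynamips.
for d in project ios cache tmp; do
    chmod u+rwx "$GNS3_DIR/$d"
done
```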

Download a copy of Dynamips (it’s generally available here), put it into the dynamips directory you created, and use the chmod command to make it executable. I also found it necessary to set the SUID bit on the Dynamips binary so that it would always run as root; I know this is not a best practice, but I could not find any other workaround. (Without the SUID bit set, GNS3 would always report an error when trying to launch Dynamips.)
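The chmod steps can be sketched as follows. The binary path and filename are examples, the touch line merely stands in for the file you actually download, and note that for the SUID bit to grant root privileges the binary must first be root-owned (sudo chown root), which this sudo-free sketch skips:

```shell
#!/bin/sh
# Example path/filename -- match it to the Dynamips build you download.
DYNAMIPS="${DYNAMIPS:-$HOME/GNS3/dynamips/dynamips-0.2.8-RC2-x86.bin}"

mkdir -p "$(dirname "$DYNAMIPS")"
touch "$DYNAMIPS"        # stand-in for the downloaded binary

chmod +x "$DYNAMIPS"     # make it executable
chmod u+s "$DYNAMIPS"    # set the SUID bit; for real use, first run
                         # "sudo chown root" so it actually runs as root
```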

Now launch GNS3 and use the Preferences in the application to set the correct path to your project directory (<GNS3 directory>/project) and the IOS/PIX directory (<GNS3 directory>/ios), the correct path to the Dynamips binary (<GNS3 directory>/dynamips), the correct path to the working directory (<GNS3 directory>/tmp), and the working directory for capture files (set it to your project directory).

At this point you should have a working GNS3 installation. You’ll still need to locate IOS images to use; once you have valid IOS images, place them in the ios directory you created earlier and configure them within GNS3 as needed. You should then be able to create a router instance, boot it, and access the router console from within GNS3.

You could stop there and have a pretty cool environment, but I wanted to go a step further. I also installed VMware Workstation 7.0.1 (I won’t go into detail here, it’s a pretty simple process) and then used the Virtual Network Editor to create some additional host-only networks (in addition to the default vmnet1). Again, this is well-documented already, so I won’t discuss the process in any length. Where it gets interesting is in how you connect GNS3 and these host-only networks so that VMs can be incorporated into your GNS3 router topology.

Here’s how you connect GNS3 and the VMware Workstation host-only networks:

  1. In GNS3, add a cloud object to the topology.
  2. Right-click the cloud object and select Configure.
  3. On the NIO Ethernet tab in the Generic Ethernet NIO section, select one of the host-only networks (like vmnet1) and click Add. This creates a link between the cloud object and the selected host-only network.

At this point, you can attach a VM to the selected host-only network, attach a router to the cloud, and be able to pass traffic from the VM to the router. Pretty cool, huh?

What I’ve done so far is create a simple network with two VMs attached to two different host-only networks which are in turn connected to two different cloud objects and two different routers. Then I created a “serial WAN link” between the two routers (GNS3 won’t, as far as I can tell, actually simulate WAN links with bandwidth limits and latency) and configured everything so that I could pass traffic from one VM to the second VM across the “virtual WAN”. The plan is to increase the network complexity—as much as my poor little Dell laptop will allow given its limited CPU and RAM—and work through the various CCNA study guides in preparation for my exam.

One other quick note about this setup (and the reason why I chose Linux as my host platform): by setting up SSH on the Linux system (with a simple sudo apt-get install openssh-server), I can now SSH into the Linux host system and then use Telnet from there to access all the various routers. In addition, because I’m using OpenBSD as the guest OS on my VMs, I can also SSH from the Linux host to the OpenBSD VMs (assuming my GNS3 network is configured correctly). I’m also thinking that there’s a way I can leverage some VNC connectivity through Workstation to access the VMs as well, but I’ll need to research that a bit to see how it works.

I would be remiss if I did not point out a couple of sites that were extremely helpful in getting this setup up and running. First, this site provided an excellent overview of the GNS3 installation on Ubuntu. Although the walk-through was for a newer version of Ubuntu, the instructions worked perfectly on 8.04.4 LTS. Second, this site gave me the “missing link” on how to connect GNS3 and VMware Workstation’s host-only networks so that you could mix the two environments. Thank you to both sites for outstanding information!

If you are a GNS3 expert or have some additional tips or tricks to share, please add them in the comments below so that all readers can benefit. Courteous comments are always welcome.

