UNIX

This category contains information related to UNIX and UNIX-related operating systems, such as OpenBSD, NetBSD, FreeBSD, or Sun Solaris.

In this post, I’m going to show you a workaround for running Synergy on OS X Mavericks. If you visit the official Synergy page, you’ll see that full Mavericks support is still listed as pending. However, if you’re willing to “get your hands dirty,” you can run Synergy on OS X Mavericks right now.

If you’re unfamiliar with Synergy, read this write-up (from 2 years ago) on how I use Synergy in my home office setup. The basic gist behind Synergy is that one computer will run the Synergy server; other computers will run the Synergy client and connect to the Synergy server. You’ll be able to use the keyboard and mouse attached to the Synergy server to control the Synergy clients.

Here’s how to get Synergy support running on OS X Mavericks now:

  1. Download the latest 10.8 Synergy build from the website. (I didn’t include a link here because it changes as the version changes and would become stale rather quickly.) This downloads as a .DMG file to your computer.
  2. Double-click the .DMG to open and mount it on your desktop. Inside the .DMG, you’ll see the Synergy app icon.
  3. Right-click (or Ctrl-click) on the Synergy app and select “Show Package Contents.”
  4. Double-click on Contents, then MacOS.
  5. In the MacOS folder, copy the synergys and synergyc files to a different location. It doesn’t really matter where; just make note of the location. (If you prefer to do this from the Terminal, see the example after this list.)
  6. Close all the windows and eject (unmount) the downloaded .DMG file.
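
If you’d rather handle step 5 from the Terminal, something along these lines should work (I’m assuming the mounted volume is named “Synergy” and that you’re copying the files to your home directory; adjust both to suit):

cp /Volumes/Synergy/Synergy.app/Contents/MacOS/synergys ~/
cp /Volumes/Synergy/Synergy.app/Contents/MacOS/synergyc ~/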

For your Synergy server, you’ll need an appropriate configuration file. You can check my previously-mentioned Synergy post for an example configuration file, or you can peruse the official wiki. Either way, create an appropriate configuration file, and make note of its name and location.

When you’re ready, just launch the Synergy server from the OS X Terminal, like this (I’m assuming that synergys and its configuration file—creatively named synergy.conf—are stored in your home directory):

~/synergys -c ~/synergy.conf

Using whatever method you prefer, copy the previously-extracted synergyc file to your Synergy client(s). As before, it doesn’t really matter too much where you put the file, just make a note of the location. Then, using the OS X Terminal, run this (as before, I’m assuming synergyc is in your home directory):

~/synergyc <Name of Synergy server>

That’s it! You should now be able to use the keyboard and mouse on the Synergy server to control the Synergy client. I can verify that current builds of the Synergy client (synergyc) work just fine on OS X Mavericks, and I would imagine that the Synergy server would work fine as well (I just haven’t had time to test it). If anyone has tested it and would like to provide feedback in the comments, I’m sure other readers would appreciate it.

Enjoy! (By the way, if you do find Synergy to be useful, I’d recommend donating to the project.)


I’ve written before about adding an extra layer of network security to your Macintosh by leveraging the BSD-level ipfw firewall, in addition to the standard GUI firewall and additional third-party firewalls (like Little Snitch). In OS X Lion and OS X Mountain Lion, though, ipfw was deprecated in favor of pf, the powerful packet filter that I believe originated on OpenBSD. (OS X’s version of pf is ported from FreeBSD.) In this article, I’m going to show you how to use pf on OS X.

Note that this is just one way of leveraging pf, not necessarily the only way of doing it. I tested (and am currently using) this configuration on OS X Mountain Lion 10.8.3.

There are two basic pieces involved in getting pf up and running on OS X Mountain Lion:

  1. Putting pf configuration files in place.
  2. Creating a launchd item for pf.

Let’s look at each of these pieces in a bit more detail. We’ll start with the configuration files.

Putting Configuration Files in Place

OS X Mountain Lion comes with a barebones /etc/pf.conf preinstalled. This barebones configuration file references a single anchor, found in /etc/pf.anchors/com.apple. This anchor, however, does not contain any actual pf rules; instead, it appears to be nothing more than a placeholder.

Since there is a configuration file already in place, you have two options ahead of you:

  1. You can overwrite the existing configuration file. The drawbacks of this approach are that a) Apple has been known to change this file during system updates, undoing your changes; and b) modifying it could break future OS X functionality.

  2. You can bypass the existing configuration file. This is the approach I took, partly due to the reasons listed above and partly because I found that pfctl (the program used to manage pf) wouldn’t activate the filter rules when the existing configuration file was used. (It complained about improper order of lines in the existing configuration file.)

Note that some tools (like IceFloor) take the first approach and modify the existing configuration file.

I’ll assume you’re going to use option #2. What you’ll need, then, are (at a minimum) two configuration files:

  1. The pf configuration file you want pf to parse on startup
  2. At least one anchor file that contains the various options and rules you want to pass to pf when it starts

Since we’re bypassing the existing configuration file, all you really need is an extremely simple configuration file that points to your anchor and loads it, like this:
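
Something like the following should do the trick (a sketch; the anchor name “customrules” and its location in /etc/pf.anchors are placeholders you’d adjust to your liking):

anchor "customrules"
load anchor "customrules" from "/etc/pf.anchors/customrules"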

The other file you need has the actual options and rules that will be passed to pf when it starts. You can get fancy here and use a separate file to define macros and tables, or you can bundle the macros and tables in with the rules. Whatever approach you take, be sure that you have the commands in this file in the right order: options, normalization, queueing, translation, and filtering. Failure to put things in the right order will prevent pf from enabling the ruleset and will leave your system without this additional layer of network protection.

A very simple set of rules in an anchor might look something like this:
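
Here is a bare-bones sketch containing only filter rules (you’d add any options, normalization, queueing, or translation rules ahead of these, in the order described above):

# Block unsolicited inbound traffic by default
block in all

# Allow all outbound traffic, keeping state
pass out all keep state

# Allow inbound SSH (remove or adjust to suit your environment)
pass in proto tcp from any to any port 22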

Naturally, you’d want to customize these rules to fit your environment. At the end of this article I provide some additional resources that might help with this task.

Once you have the configuration file in place and at least one anchor defined with rules (in the right order!), then you’re ready to move ahead with creating the launchd item for pf so that it starts automatically.

However, there is one additional thing you might want to do first—test your rules to be sure everything is correct. Use this command in a terminal window while running as an administrative user:

sudo pfctl -v -n -f <path to configuration file>

If this command reports errors, go back and fix them before proceeding.

Creating the launchd Item for pf

Creating the launchd item simply involves creating a properly-formatted XML file and placing it in /Library/LaunchDaemons. It must be owned by root; otherwise, it won’t be processed at all. If you aren’t clear on how to make sure it’s owned by root, go do a bit of reading on sudo and chown.

Here’s a launchd item you might use for pf:
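
(What follows is a sketch; the Label string and the final path under ProgramArguments are placeholders, as discussed in the notes below.)

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Label</key>
    <string>org.custom.pf</string>
    <key>ProgramArguments</key>
    <array>
        <string>/sbin/pfctl</string>
        <string>-e</string>
        <string>-f</string>
        <string>/etc/pf.anchors/custom.pf.conf</string>
    </array>
    <key>RunAtLoad</key>
    <true/>
</dict>
</plist>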

A few notes about this launchd item:

  • You’ll want to change the last <string> item under the ProgramArguments key to properly reflect the path and filename of the custom configuration file you created earlier. In my case, I’m storing both the configuration file and the anchor in the /etc/pf.anchors directory.
  • As I stated earlier, you must ensure this file is owned by root once you put it into /Library/LaunchDaemons. It won’t work otherwise.
  • If you have additional parameters you want/need to pass to pfctl, add them as separate lines in the ProgramArguments array. Each individual argument on the command line must be a separate item in the array.

Once this file is in place with the right ownership, you can either use launchctl to load it or restart your computer. The robust pf firewall should now be running on your OS X Mountain Lion system. Enjoy!
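
If you go the launchctl route, a command along these lines should do it (the plist filename here is a placeholder; substitute whatever name you gave your file):

sudo launchctl load -w /Library/LaunchDaemons/org.custom.pf.plist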

Some Additional Resources

Finally, it’s important to note that I found a few different web sites helpful during my experimentations with pf on OS X. This write-up was written with Lion in mind, but applies equally well to Mountain Lion, and this site—while clearly focused on OpenBSD and FreeBSD—was nevertheless quite helpful as well.

It should go without saying, but I’ll say it nevertheless: courteous comments are welcome! Feel free to add your thoughts, ideas, questions, or corrections below.


Welcome to Technology Short Take #26! As you might already know, the Technology Short Takes are my irregularly-published collections of links, articles, thoughts, and (sometimes) rants. I hope you find something useful here!

Networking

  • Chris Colotti, as part of a changed focus in his role at VMware, has been working extensively with Nicira NVP. He’s had a couple of good posts; this one is a primer on how NVP works, and this one discusses the use of the Open vSwitch (OVS) vApp. As I mentioned before in other posts, OVS is popping up in more and more places—it might be a good idea to make sure you’re familiar with it.
  • This article by Ivan Pepelnjak on VXLAN termination on physical devices is over a year old, but still very applicable—especially considering Arista Networks recently announced their 7150S switch, which sports hardware VTEP (VXLAN Tunnel End Point) support (meaning that it can terminate VXLAN segments).
  • Brad Hedlund dives into Midokura Midonet in this post on L2-L4 network virtualization. It’s a good overview (thanks Brad!) and worth reading if you want to get up to speed on what Midokura is doing. (Oh, just as an aside: note that Midokura leverages OVS in their solution. Just saying…)
  • This blog post provides more useful information from Kamau Wanguhu on VXLAN and proxy ARP. Kamau also has an interesting post on network virtualization, although—to be honest—the post is long on messaging/positioning and short on technical information. I prefer the latter to the former.

Servers/Hardware

  • This mention of the Dell PowerEdge M I/O Aggregator looks interesting, although I’m still not really clear on exactly what it is or how it works. I guess this first article was a tease?

Security

Nothing this time around, but I’ll stay alert for items to include in future posts!

Cloud Computing/Cloud Management

  • Want to know a bit more about how to configure VXLAN inside VCD? Rawlinson Rivera has a nice write-up that is worth reviewing.
  • Clint Kitson, an EMC vSpecialist, talks about some VCD integrity scripts he created. Looks like some pretty cool stuff—great work, Clint!
  • For the past couple of weeks I’ve been (slowly) reading Kevin Jackson’s OpenStack Cloud Computing Cookbook; it’s very useful. It’s worth a read if you want to get up to speed on OpenStack; naturally, you can get it from Amazon.

Operating Systems/Applications

  • At the intersection of cloud-based storage and configuration management, I happened to find this very interesting Puppet module designed to fetch and update files from an S3 bucket. Through this module, you could store files in S3 instead of using Puppet’s built-in file server. (By the way, this module also works with OpenStack Swift.)
  • One of the things I’ve complained about regarding newer versions of OS X is the “hiding” of the Unix underpinnings. Perhaps I should read this book and see if my thinking is unfounded?

Storage

  • Chris Evans takes a look at Hyper-V 3.0’s Virtual Fibre Channel feature in this write-up. From what I’ve read, it sounds like Hyper-V’s NPIV implementation is more robust than VMware’s broken and busted NPIV implementation. (If you don’t know why I say that about VMware’s implementation, ask anyone who’s tried to use it.) The real question is this: is NPIV support in a hypervisor of any value any longer?
  • Gina Minks (formerly of Dell, now with Inktank) recommended I have a look at Ceph and mentioned this post on migrating to Ceph (with a little libvirt thrown in).
  • Gluster might be another project that I need to spend some time examining; this post on using Gluster with oVirt 3.1 looks interesting. Anyone have any pointers for a Gluster beginner?
  • Mirantis has a post about some Nova Volume integration with Isilon. I’ve often said that I think scale-out platforms like Isilon (among others) are an important foundation for future storage solutions. It’s cool to see some third-party development happening to integrate Isilon and OpenStack.

Virtualization

That’s all for this time around. As always, courteous comments are welcome (encouraged, in fact!), so feel free to speak up in the comments below. I’d love to hear your feedback.


As you might have noticed in recent blog posts, I’m spending a fair amount of time working with open source solutions like Ubuntu Linux, OpenBSD, Puppet, and similar. As part of the effort to make myself more familiar with these and other open source projects, I’ve decided to re-architect my home network using predominantly open source software.

Here are the open source software projects that I know for sure I’ll end up using:

  • Ubuntu Server 12.04 LTS
  • OpenBSD (probably version 5.1)
  • Squid and the Squidguard content filter
  • BIND v9
  • ISC DHCP server
  • Open source Puppet

However, there are a few packages that I haven’t quite settled on yet. I’d love to hear some feedback on these questions:

  1. What do you recommend for low-volume web serving—Apache HTTP or Nginx? (Manageability via Puppet is a consideration, too.)
  2. It looks as if I can use Heartbeat to provide high availability/failover at the application level for the web and web proxy services (this would be active/passive only). Anyone have any experience with Heartbeat, or some good resources to share?
  3. It would be great if I could actually do load balanced sessions for the web and web proxy services (active/active instead of active/passive). It appears as if LVS will do this, but it also looks like I’ll need separate VMs (everything will be virtualized) for LVS. Anyone have some resources for LVS?
  4. Are there any other projects or tools I should be considering?

Thanks for any help or information you can provide!


As I mentioned in a previous post, the next iteration of my Puppet explorations involves the use of Hiera. Hiera, a project also managed by Puppet Labs, is described as “a simple pluggable hierarchical database.” In the Puppet world, what that means is we can use Hiera to store data values outside of the manifests, then look them up dynamically as the configurations are being compiled and applied to the nodes. In a future post, I’ll provide an example of how you could use Hiera in a multi-OS Puppet environment.

For now, though, I just want to talk about how to get Hiera up and running and working in a Puppet environment, as it wasn’t as straightforward as I expected it to be.

I’m using the same virtual environment for this post as I’ve used in previous posts. I have a Puppet master server running on an Ubuntu 12.04 LTS VM; this master server services client VMs running Ubuntu 12.04 LTS, CentOS 5.8, and OpenBSD 5.1. Keep in mind that if you are using a different distribution of Linux than what I’m using here, your specific directories and paths might be slightly different.

Here are the steps that I took to get Hiera up and running on the Puppet master server:

  1. First, I used gem install hiera hiera-puppet to install Hiera and Hiera-Puppet (the connecting code between Hiera and Puppet).

  2. Optionally, you can next run gem list to verify that Hiera and Hiera-Puppet are included in the list of installed Ruby gems.

  3. You’ll need to know exactly where the Hiera-Puppet files are found on your system, so run gem list -d to show the details of the installed gems. This will include the path where the files are found. On my Ubuntu 12.04 LTS Puppet master server, Hiera and Hiera-Puppet were installed to /var/lib/gems/1.8/gems (in their own directories, respectively).

  4. Next, you’ll need to know where Puppet stores its modules. Run puppet master --configprint modulepath to get the list of directories where Puppet modules are stored. On my Ubuntu 12.04 LTS Puppet master server, that included /etc/puppet/modules.

  5. Copy the Hiera-Puppet directory (found in step 3) to one of the Puppet module directories (found in step 4). This ensures that Puppet can actually leverage Hiera. Note that this Puppet Labs post provides a different process for accomplishing this; I found that their process didn’t work in my environment.

  6. Hiera will need a directory to store its data files; I used /etc/puppet/hieradata. I don’t think it matters where the directory is; just make a note of its location.

  7. You’ll need to create a Hiera configuration file located in the main Puppet directory (on my Ubuntu 12.04 LTS Puppet master server, this was /etc/puppet). This file is called hiera.yaml and will probably need to look something like this (you’d specify the directory created in step 6 on the last line here):

     ---
     :hierarchy:
       - "%{operatingsystem}"
       - common
     :backends:
       - yaml
     :yaml:
       :datadir: '/etc/puppet/hieradata'

  8. In the directory you created for Hiera and specified in hiera.yaml, you’ll need to create the YAML files for your hierarchy. Since I’m using a hierarchy based on operating system, I created YAML files for OpenBSD, CentOS, Ubuntu, etc. One note: the file names must match the operating system name returned by Facter exactly, including case (i.e., OpenBSD.yaml instead of openbsd.yaml).
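
For instance, a minimal OpenBSD.yaml might contain nothing more than a single key/value pair (the key name here is purely hypothetical):

---
ntp_conf_path: '/etc/ntpd.conf'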

At this point, you should now have a working Hiera installation. In a future post, I’ll show you how I used Hiera with Puppet to create OS-based customized configuration files.

In working to get Hiera up and running, I found the following websites and pages to be helpful:

First Look: Installing and Using Hiera (part 1 of 2)
Puppet configuration variables and Hiera

Corrections, suggestions for improvement, or questions are always welcome! Speak up in the comments below.


I received some great feedback on my post about using Puppet with multiple operating systems. One of the suggestions was to do a better job of following the “official” Puppet style guide for syntax and file layout. With that in mind, I installed puppet-lint on my Puppet master server using apt-get install rubygems followed by gem install puppet-lint.

Using puppet-lint, I was able to correct all “errors” in the manifests, but was left with one warning regarding line length. As far as I know, this warning is unavoidable: the offending line is the one that specifies the source of the OpenBSD packages, and I can’t shorten that URL because it’s not under my control.

In any case, here are the corrected Puppet manifests. I checked these with both puppet parser validate and puppet-lint, but if anyone spots anything I’ve missed, feel free to point it out in the comments. Unless otherwise stated, all files are placed in the modules/ntp/manifests directory under the Puppet directory (which on my Puppet master server is /etc/puppet).
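
For reference, the checks are run against each file individually, something like this:

puppet parser validate init.pp
puppet-lint init.pp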

init.pp

# NTP class definition

class ntp {
  include "ntp::$::operatingsystem"

  file { 'ntp.conf':
    ensure        => present,
    path          => '/etc/ntp.conf',
    owner         => 'root',
    group         => 'root',
    mode          => '0644',
    source        => 'puppet:///modules/ntp/ntp.conf',
    require       => Package['ntp'],
  }

  package { 'ntp':
    ensure        => installed,
  }

  service { 'ntp':
    ensure        => running,
    subscribe     => File['ntp.conf'],
    require       => File['ntp.conf'],
  }
}

openbsd.pp

# NTP subclass for OpenBSD

class ntp::openbsd inherits ntp {
  File['ntp.conf'] {
    path          => '/etc/ntpd.conf',
    group         => 'wheel',
    source        => "puppet:///modules/ntp/ntpd.conf.${::operatingsystem}",
  }

  Package['ntp'] {
    source        => 'http://openbsd.mirrorcatalogs.com/pub/OpenBSD/5.1/packages/i386/',
  }

  Service['ntp'] {
    provider      => 'base',
    hasstatus     => false,
    start         => '/usr/sbin/ntpd',
  }
}

ubuntu.pp

# NTP subclass for Ubuntu Linux

class ntp::ubuntu inherits ntp {
  Service['ntp'] {
    provider      => 'init',
    path          => '/etc/init.d/',
  }
}

centos.pp

# NTP subclass for CentOS Linux

class ntp::centos inherits ntp {
  Service['ntp'] {
    name          => 'ntpd',
  }
}

I know that there are NTP modules for Puppet that “take care” of all this sort of thing for you, but creating this was, for me, part of the learning process. I’m going to tackle SSH next, and—per the comments on my first Puppet post—I’ve also started reading up on Hiera to see how that might fit in here. The initial reading I’ve done leads me to believe that the combination of Puppet and Hiera could be quite powerful.

As always, courteous comments are welcome. Feel free to speak up below!


Over the last few days, I’ve been working with the open source edition of Puppet, the configuration management/automation tool. I’ve learned quite a bit, but I still have a long, long way to go. What I wanted to share in this post is what I learned in using Puppet with clients using different operating systems (OSes). If you are a Puppet expert, I’d love to hear any tips and tricks you might have to help me improve.

For now, I won’t go into all the details involved in setting up a Puppet master and Puppet client systems, as that process is reasonably well covered elsewhere. (Although not directly related to Puppet specifically, Jonas Rosland has some instructions for installing Puppet on Ubuntu Server 12.04 LTS here.) If you’re not already familiar with getting Puppet up and running—in a very basic configuration, at least—then have a look at one of the many tutorials that are available.

For my testing, I used the following environment:

  • I chose to use Ubuntu Server 12.04 LTS for my Puppet master server. It was assigned a static IP address.
  • I used client systems running OpenBSD 5.1, Ubuntu Server 12.04 LTS, and CentOS 5.8. These systems were assigned dynamic IP addresses via DHCP.
  • Both the Puppet master server as well as the clients were VMs running under VMware Fusion 4.1.2 on Mac OS X 10.6.8.
  • I already had a solid DNS infrastructure in place (using BIND on OpenBSD as the master with a slave running Mac OS X Server 10.6.8), so I leveraged that for my test environment.

For the purposes of this first attempt at using Puppet, I decided I would try to automate the configuration of NTP on the client systems. Before starting the testing, I took care of the SSL certificates (using puppet agent --test on the client side to perform an initial connection to the Puppet master server, then puppet cert sign on the Puppet master server to sign the client certificates). This accomplished two things:

  1. It verified that DNS resolution was working and that the clients could communicate with the Puppet master server.
  2. It took care of getting the SSL certificates signed and in place.

Now that all the preliminaries are out of the way, let’s get into my specific configuration, and what I want to share with you. If you’ve looked at Puppet at all, you’ll recall that the “trifecta,” so to speak, of Puppet manifests (a manifest is Puppet’s declarative configuration file) is the use of the file-package-service combination of resources, like this:
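
(A sketch of what such a manifest might look like for NTP; the paths and names are illustrative.)

class ntp {
  package { 'ntp':
    ensure => installed,
  }

  file { '/etc/ntp.conf':
    ensure  => present,
    source  => 'puppet:///modules/ntp/ntp.conf',
    require => Package['ntp'],
  }

  service { 'ntp':
    ensure    => running,
    subscribe => File['/etc/ntp.conf'],
  }
}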

Obviously, this is just an example; you’d want/need to customize the values specified above for the various resources for your particular installation. The point is that it’s very common (as I understand it) to have this file-package-service combination in Puppet manifests.

This is all well and good, but what about when you need to manage multiple OSes with Puppet? What do you do then? Consider, for example, the OSes involved in my environment: OpenBSD, Ubuntu, CentOS. I know for certain that OpenBSD’s implementation of NTP uses a different configuration file than Ubuntu’s NTP package. How would one handle that?

Dominic Cleal’s blog gave me the kickstart I needed to get started down the path to a multi-OS manifest. At first I experimented with conditionals in the manifest, like this:
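
(Something along these lines, using a selector on the operatingsystem fact; a sketch rather than my exact manifest.)

file { 'ntp.conf':
  ensure => present,
  path   => $::operatingsystem ? {
    'OpenBSD' => '/etc/ntpd.conf',
    default   => '/etc/ntp.conf',
  },
  source => 'puppet:///modules/ntp/ntp.conf',
}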

That worked fine for the file definition, but it broke down when I got to the package definition. Why? OpenBSD packages require that you define a source, but other platforms don’t necessarily need (or want) a source defined.

After futzing around with various conditionals, I finally read the rest of Dominic’s post, which suggested the use of subclasses for various operating systems. Now that I had read through pages and pages of Puppet manuals and visited site after site trying to figure out how to make this work, the idea of subclasses made sense. So I created an ntp module (or class) with subclasses named ntp::common and ntp::openbsd, using the syntax Dominic shared.

Sadly, it didn’t work. When the Puppet agent tried to apply the configuration, it complained that the “ntp.conf” file resource had already been defined once in the ntp::common section and couldn’t be defined again in the ntp::openbsd section.

Never one to give up so easily (and thanks for encouraging words on Twitter, Cody—”Simple is for the weak. You’ve got this!”), I turned to the #puppet IRC channel, where I was enlightened as to my mistake—my syntax was off. Instead of using the syntax to define a resource in the subclasses, I needed to use the syntax to refer back to an already-defined class.

Let’s assume I used this sort of definition in ntp::common:
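
(Again, a sketch rather than the exact manifest.)

class ntp::common {
  file { 'ntp.conf':
    ensure => present,
    path   => '/etc/ntp.conf',
    owner  => 'root',
    mode   => '0644',
    source => 'puppet:///modules/ntp/ntp.conf',
  }
}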

If that’s the case, then this is the syntax I needed in the ntp::openbsd subclass:
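
(A sketch, paralleling the common class above.)

class ntp::openbsd inherits ntp::common {
  File['ntp.conf'] {
    path   => '/etc/ntpd.conf',
    group  => 'wheel',
    source => 'puppet:///modules/ntp/ntpd.conf.openbsd',
  }
}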

See what that does? By using the File["ntp.conf"] syntax instead of the file { "ntp.conf" } syntax, I simply referred back to an existing resource and overrode the previously-defined values. This allowed me to create OS-specific subclasses with OS-specific settings. In turn, that allows me to manage different OSes using a single module with subclasses. If I ever need to add a new OS to be managed, I can simply add another subclass with the OS-specific settings and I’m off to the races.

Cool, huh? (And not too terribly shabby for a beginner, I think!)

If you’re interested, you can get the full manifest (or module) that I created here. You’ll note that I’m using the Puppet file serving mechanism, which I understand is not “best practices” any longer; I’ll probably update it soon to the new module-based mechanism. Puppet experts are encouraged to share any ideas/tips/tricks that might further improve this method, and everyone is welcome to share their (courteous) comments. Enjoy!


Using multiple layers of security has long been recognized as a useful strategy in hardening your computers against attack or exploit. In this post, I want to explain how to set up and configure the BSD-level ipfw firewall that is present in Mac OS X. While ipfw is certainly not a security panacea, it can be a solid part of a broader security strategy.

Setting up ipfw on Mac OS X has three basic steps:

  1. Create a shell script that launches ipfw.
  2. Create a configuration file that the shell script from step 1 uses when launching ipfw.
  3. Create a LaunchDaemon in Mac OS X that calls the shell script from step 1 to start and configure ipfw every time your Mac boots.

Let’s take a deeper look at each of these steps.

Create a Startup Shell Script

This part is harder than it sounds. At its most basic level, the script only needs to call /sbin/ipfw and a configuration file, like this:

#!/bin/sh

/sbin/ipfw -q /etc/ipfw.conf

I did quite a bit of digging to see if something more than that was suggested, and finally came up with this startup shell script:

#!/bin/sh
# Startup script for ipfw on Mac OS X

# Flush existing rules
/sbin/ipfw -f -q flush

# Silently drop unsolicited connections
/usr/sbin/sysctl -w net.inet.tcp.blackhole=2
/usr/sbin/sysctl -w net.inet.udp.blackhole=1

# Load the firewall ruleset
/sbin/ipfw -q /etc/ipfw.conf

This startup shell script can generally be put anywhere; I chose to put it in /usr/local/bin (which may not exist by default on your system).

With the startup shell script in place, you’re now ready to proceed to the next step, which is perhaps the most involved and detailed step in the process.

Create the Configuration File

The configuration file contains all of the firewall rule definitions for ipfw and is therefore one of the most complicated steps. This complexity is not because the configuration file itself is difficult, but rather because the rules that should be included will vary greatly from user to user and network to network. I strongly encourage you to do your own research to understand what sort of firewall rules are most appropriate in your environment and for your setup.

You can (theoretically) place the configuration file anywhere; I chose to place the file in /etc as ipfw.conf (very original, I know). If you do use something other than /etc/ipfw.conf, then adjust the startup shell script accordingly.

Rather than provide any sort of suggested firewall ruleset here, let me suggest some other sites that provide excellent information on suggested rulesets for ipfw:

Use a custom firewall in 10.5 with ipfw (Mac OS X Hints)

Setting up firewall rules on Mac OS X

Configuring IPFW Firewalls on OS X

From those sites—and there are many others besides just those—you should be able to put together an ipfw ruleset that is right for your network and your environment. Once you have the configuration file created and in place, then you’re ready for the final step: ensuring that ipfw launches automatically when you boot your Mac.

Create a LaunchDaemon

The final step is ensuring that ipfw launches automatically every time your Mac boots. This is accomplished by creating a text file—known as a property list file, or a plist file—with very specific contents and placing it in the /Library/LaunchDaemons folder.

Here’s my plist file, named com.apple.ipfw.plist (you can use a different name, like your own domain name, in the filename):
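
(The contents amount to something like the following sketch; the Label string is an assumption, and the path under ProgramArguments should match wherever you placed your startup script.)

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Label</key>
    <string>com.apple.ipfw</string>
    <key>ProgramArguments</key>
    <array>
        <string>/usr/local/bin/ipfwstartup.sh</string>
    </array>
    <key>RunAtLoad</key>
    <true/>
</dict>
</plist>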

Don’t just copy and paste this file “as is” into your system! You’ll need to customize it to fit your system. Specifically, under the ProgramArguments key, the path to and name of the startup shell script should be adjusted to match the shell script you created earlier. In my case, the script is named ipfwstartup.sh and is found in /usr/local/bin. This startup shell script should, in turn, refer to the ipfw configuration file you created.

I believe—but I could be mistaken—that you’ll need to set ownership of the LaunchDaemon plist file to root:wheel. You can do this using the chown command in Terminal.

Once the plist file is in place, reboot your Mac. Once your Mac boots up and you’ve logged in, fire up the Terminal and run this command (you will need to use sudo if your account has administrative privileges; if your account doesn’t have administrative privileges, you should log in as an account that does in order to test things):

sudo ipfw list

This command should return the list of firewall rules you embedded in the configuration file. If it doesn’t, then go back and double-check your setup. Be sure that the plist file has the correct reference to the startup shell script, and that the startup shell script has the correct reference to the configuration file. You should also check to ensure that you made the startup shell script executable (using the chmod command).

If the command does return your firewall ruleset, then you’re all set.

Note that using ipfw does not in any way prevent you from using other firewalls—such as the built-in application-level firewall in Mac OS X—to further secure your system.

Questions? Comments? Clarifications? Please feel free to speak up in the comments below to add your thoughts.


Switching to EagleFiler

Over the last month or so, I’ve taken a strong interest in moving a fair number of my predominantly text-based files back to “standards-based” formats such as RTF and plain text. I’ve started using Markdown as a means of storing formatting information in plain text files, and then using tools like Pandoc to convert these Markdown files into the desired destination format. I’ll likely discuss this in more detail in a future post, but what I wanted to discuss here was the effect of this decision on my software usage.

If you’ve read any of the posts I’ve published on my Getting Things Done setup, you’ll know that I used an application called Yojimbo as my “anything bucket.” Yojimbo is a native Mac OS X application that operated as part of the consumption phase of my workflow and provided a way for me to collect and organize all the various bits of information that pass in front of me. Yojimbo is a pretty handy application, and I made it even more handy with some home-grown AppleScripts that made it easier and faster to get information into and then back out of the application.

However, I recently started examining other applications in the same space as Yojimbo, in an effort to ensure that I was using the most effective tools possible. (Consider this a “sharpening the saw” exercise.) I evaluated DEVONthink Pro and EagleFiler, testing each of them within my workflow to see if either of them added some value above and beyond what I currently had with Yojimbo. This was occurring at the same time that I started shifting my text-based formats back to plain text, RTF, and Markdown, and so part of the evaluation process was testing how well those applications fit into this new way of managing my text-based data.

What I found, surprisingly, was that EagleFiler was a great fit for this new workflow. One of my long-time complaints of Yojimbo was that I couldn’t use my preferred applications (Skim for PDFs or TextMate for text-based files), an issue that was even more of a problem now that I was making greater use of TextMate with plain text files and Markdown. I explored ways of using AppleScript to modify Yojimbo’s behavior, but it was beyond my simple AppleScript skills. EagleFiler, on the other hand, simply leveraged the default applications I used with Mac OS X. PDFs opened in Skim, text files opened in TextMate (where I could then use TextMate bundles to convert formats between HTML, plain text, and Markdown), and RTF documents opened in Bean (which I’d adopted as a lightweight editor over the oh-so-bulky Microsoft Word). This made it a great fit for the new way I was working with documents. In addition, EagleFiler came with some useful capture functionality built-in, eliminating the need for some of my home-grown AppleScripts. Finally, EagleFiler used an “open” library format that stored my items as files in the file system. If, for whatever reason, I ever decided to ditch EagleFiler, all my information would be easily accessible. This was a real attraction for me.

So, after only a week or so of testing, I switched completely away from Yojimbo and started using EagleFiler instead. Thus far, I’ve been quite pleased with the results. While it seems simple, I like the ability to mark items as unread (something I couldn’t do in Yojimbo, so I had to approximate that functionality with certain tags). I still prefer the way Yojimbo displays metadata about bookmarks in the same window (in EagleFiler you have to open the Inspection window), but this has not been a significant problem.

I also anticipate that the use of the file system will make integrating tools like Pandoc into my workflow possible; it didn’t seem possible before with Yojimbo. Because EagleFiler’s library is file system-based, it should be possible to use AppleScript to manipulate records by manipulating the underlying files in the file system. This will be an area of exploration for me over the next few months as I also refine my Markdown-Pandoc workflows for document generation.

In my opinion, if you’re considering an “anything bucket” for your Mac to help keep your information organized, EagleFiler should definitely be on your list of applications to consider.


Over the last day or so I’ve been messing around at the UNIX command line on my Mac, trying to find a workaround for a VPN policy that doesn’t allow split tunneling. (Just as a stupid side question, what is the security issue with split tunneling, anyway?) Along the way, I uncovered some handy commands for gathering information about the networking configuration of your Mac.

I can’t take credit for all of these; most of them were shared with me by Matt Cowger, fellow VCDX and vSpecialist.

If anyone has any additional commands they’d like to share, I encourage you to add them to the comments on this post. Enjoy!

To find the IP address of the default gateway:

netstat -nr -f inet | grep default | grep en | awk '{print $2}'

To find the interface name of the default route:

netstat -nr -f inet | grep default | grep en | awk '{print $6}'

To find the IP address assigned to the interface for the default gateway:

ORGGWIF=`netstat -nr -f inet | grep default | grep en | awk '{print $6}'`
ifconfig $ORGGWIF | grep "inet " | awk '{print $2}'

To find the default gateway network:

ORGGWIF=`netstat -nr -f inet | grep default | grep en | awk '{print $6}'`
netstat -I $ORGGWIF -n | grep -v : | grep $ORGGWIF | awk '{print $3}'

To find the subnet mask for the default gateway network:

ORGGWIF=`netstat -nr -f inet | grep default | grep en | awk '{print $6}'`
system_profiler SPNetworkDataType | grep -A 15 $ORGGWIF | grep "Subnet Masks" | awk '{print $3}'

To convert the subnet mask into CIDR format:

ORGGWIF=`netstat -nr -f inet | grep default | grep en | awk '{print $6}'`
ORGGWMASK=`system_profiler SPNetworkDataType | grep -A 15 $ORGGWIF | grep "Subnet Masks" | awk '{print $3}'`
echo obase=2.$ORGGWMASK | tr . \; | bc | tr -d 0\\n | wc -c | awk '{print $1}'

To determine the wireless SSID to which your Mac is currently associated:

/System/Library/PrivateFrameworks/Apple80211.framework/Versions/A/Resources/airport -I | grep SSID | tail -n 1 | awk '{print $2}'

CLI gurus and wizards are encouraged to share other useful commands in the comments below. Thanks!

