Writing

You are currently browsing articles tagged Writing.

Blog Migration in the Works

You might have noticed that blog content has been a bit sparse over the last few weeks. I haven’t generated any new content because all my spare time is taken up with preparing to migrate this site to a new hosting platform.

Sometime over the holiday season, I’ll be migrating this site from a hosted WordPress installation to Jekyll running on GitHub Pages. Given that I have 9 years of content (over 1,600 blog posts), this is a pretty fair amount of work.

Most of the “structural” work on the new site is already complete; you can get a preview of the site by visiting http://lowescott.github.io. There’s no content there yet (other than some boilerplate content), but you’ll be able to get a feel for how the new layout will look and work. As you can see, I’ll be using the Lanyon theme, which provides a nice clean layout and a good experience on both mobile and desktop.

There’s still some additional “structural” work to be done, such as adding support for comments (which will be handled via Disqus), but I hope to have that done in the next few days.

Once the migration actually happens (and I’ll post something letting everyone know it’s happened), the site URL and all the posts’ permalinks should be preserved, so you won’t need to change any bookmarks or anything like that. Obviously, if you find that something isn’t working, please let me know.

Thanks for reading!


For non-programmers, making a meaningful contribution to an open source project can be difficult; this is as true for OpenStack as for other open source projects. Documentation is a way to contribute, but in the case of OpenStack there is a non-trivial setup required in order to be able to contribute to the OpenStack documentation. In this post, I’m going to share how to set up the tools to contribute to OpenStack documentation in the hopes that it will help others get past the “barrier to entry” that currently exists.

I’ve long wanted to be more involved in supporting the OpenStack community, beyond my unofficial support via advocacy and blogging about OpenStack. I felt that documentation might be a way to achieve that goal. After all, I’ve written books and have been blogging for 9 years, so I should be able to add some value via documentation contributions. However, the toolchain that the OpenStack documentation uses requires a certain level of familiarity with development-focused tools, and the “how to” guides were less than ideal because of assumptions made regarding the knowledge level of new contributors. For these reasons, I felt that sharing how I (a non-programmer) set up the tools for contributing to OpenStack documentation might encourage other non-programmers to do the same, and thus get more people involved in the project.

This post is not intended to replace any official guides (like this one or this one), but rather to supplement such guides.

Using a Linux Environment

It’s no secret that I use OS X, and the toolchain that the OpenStack documentation team uses is—like the rest of OpenStack—pretty heavily biased toward Linux. One of the deterring factors for me was the difficulty of getting these tools running on OS X. While it is possible to use the toolchain on OS X directly, it’s not necessarily simple or straightforward. It wasn’t until just recently that I realized: why not just use a Linux VM? Using a native Linux environment sidesteps all the messiness of trying to get a Linux-centric toolchain running on OS X (or Windows).

I initially started with a locally-hosted Linux VM on my MacBook Air, but after some consideration I decided to alter that approach and build my environment in a cloud VM running Ubuntu 12.04 on Amazon EC2 (via Ravello Systems).

The use of a cloud VM has some advantages and disadvantages:

  • Because the toolchain is heavily biased toward Linux, using a Linux-based cloud VM does simplify some things.
  • Cloud VMs will typically have much greater bandwidth, making it easier to install packages and pull down Git repositories. This was especially helpful this past week while I was traveling to the OpenStack Summit, where—due to slow conference wireless access—it took 2 hours to clone the GitHub repository for the OpenStack documentation (this was while I was trying to use a locally-hosted VM on my laptop).
  • It provides a clean separation between my local workstation and the development environment, meaning I can access my development environment from any system via SSH. This prevents me from having to keep multiple development environments in sync, which is something I would have had to do when working from my home office (where I use my MacBook Air as well as a Mac Pro).
  • The most significant drawback is that I cannot work on documentation patches when I have no network connectivity. For example, I’m writing this post as I’m traveling back from Paris. If I had a local development environment, I could have assigned a few bugs to myself and worked on them while flying home. With a cloud VM-based environment, I cannot. I could have used a VM hosted on my local laptop (via VirtualBox or VMware Fusion), but that would have negated some of the other advantages.

In the end, you’ll have to evaluate for yourself what approach works best for you, your working style, and your existing obligations. However, the use of a Linux VM—local or via a cloud provider—can simplify the process of setting up the toolchain and getting started contributing, and it’s an approach I’d recommend for most anyone.

The information presented here assumes you are using an Ubuntu-based VM, either hosted with a cloud provider or on your local system. I’ll leave it to the readers to worry about the details of how to turn up the Ubuntu VM via their preferred mechanism.

Setting Up the Toolchain

Once you’ve established an Ubuntu Linux instance using the provider of your choice, use these steps to set up the documentation toolchain (note that, depending on the configuration of your Ubuntu Linux instance, you might need to preface these commands with sudo where appropriate):

  1. Update your system using apt-get update and apt-get -y upgrade. Depending on the updates that are installed, you might need to reboot.

  2. Generate an SSH keypair using ssh-keygen. Keep track of this keypair (and make note of the associated passphrase, if you assigned one), as you’ll need it later. You can also re-use an existing keypair, if you’d like; I preferred to generate a keypair specifically for this purpose. If you are going to re-use an existing keypair, you’ll need to transfer the keys to this Linux instance via scp or your preferred mechanism.

  3. Install the necessary prerequisite packages (I’ve line wrapped the command below, denoted by a backslash, for readability):

    apt-get install maven git git-review python-pip libxml2-dev \
    libxslt1-dev python-dev gcc gettext zlib1g-dev
  4. Configure Git using git config to set your name and e-mail address. The e-mail address, in particular, should match what you’re using with the OpenStack Foundation. The commands to do this are git config --global user.name and git config --global user.email (a consolidated example of the Git-related commands appears just after this list).

  5. Clone the GitHub repository where the documentation is found:

    git clone https://github.com/openstack/openstack-manuals.git
  6. Install the components required to run tests against the documentation sources (this command assumes that you cloned the repository into a subdirectory named “openstack-manuals”, which would be the default result of using the command above to clone the repository):

    pip install -r openstack-manuals/test-requirements.txt
  7. If you haven’t already established a Launchpad account, you’ll need one. Go create one (probably best done outside of the Ubuntu Linux environment) and make note of the username. Use your Launchpad account to verify that you can log in to both bugs.launchpad.net as well as review.openstack.org.

  8. While logged into review.openstack.org, go to the settings for your account and paste in the contents of the public key (not the private key) you generated earlier (or that you are re-using).

  9. Back in the Ubuntu Linux instance, set your Launchpad username:

    git config --global --add gitreview.username "<Launchpad username>"
  10. Change into the directory where you cloned the “openstack-manuals” repository and run git review -s. Assuming you already added the “gitreview.username” configuration parameter and that you’ve already uploaded the public key to your account on review.openstack.org, this should work without any issues. If you forgot to set “gitreview.username”, it should prompt you for your Launchpad username. If you forgot to upload the public SSH key, then it will probably error out.

  11. If git review -s completes without any obvious errors, you can double-check that a Gerrit upstream for the repository was added by using the command git remote -v. If you don’t see an upstream repo labeled gerrit, then something didn’t work. If you do see the upstream repo labeled gerrit, then you should be fine.
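
For reference, here’s a consolidated sketch of the Git-related commands from steps 4, 9, 10, and 11 above. The name, e-mail address, and Launchpad username shown are placeholders; substitute your own values:

    git config --global user.name "Jane Doe"
    git config --global user.email "jane@example.com"
    git config --global --add gitreview.username "jdoe"
    cd openstack-manuals
    git review -s
    git remote -v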

Once you’ve completed these steps, then you’re ready to start contributing to OpenStack documentation. I have a separate post planned that describes the process for actually contributing; stay tuned for that soon.

In the meantime, if anyone has any questions, corrections, or clarifications about the information in this post, please speak up in the comments below.


One of the great things about this site is the interaction I enjoy with readers. It’s always great to get comments from readers about how an article was informative, answered a question, or helped solve a problem. Knowing that what I’ve written here is helpful to others is a very large part of why I’ve been writing here for over 9 years.

Until today, I’ve left comments (and sometimes trackbacks) open on very old blog posts. Just the other day I received a comment on a 4-year-old article where a reader shared another way to solve the same problem. Unfortunately, that has to change. Comment spam on the site has grown considerably over the last few months, despite the use of a number of plugins to help address the issue. It’s no longer just an annoyance; it’s now a problem.

As a result, starting today, all blog posts more than 3 years old will automatically have their comments and trackbacks closed. I hate to do it—really I do—but I don’t see any other solution to the increasing blog spam.

I hope that this does not adversely impact my readers’ ability to interact with me, but it is a necessary step.

Thanks to all who continue to read this site. I do sincerely appreciate your time and attention, and I hope that I can continue to provide useful and relevant content to help make people’s lives better.


In just a few weeks, I’ll be participating in a book sprint to create a book that provides architecture and design guidance for building OpenStack-based clouds. For those of you who aren’t familiar with the idea of a book sprint, it’s been described to me like this:

  1. Take a group of people—a mix of technical experts and experienced writers—and lock them in a room.
  2. Don’t let them out until they’ve written a book.

(That really is how people have described it to me. I’ll share my experiences after it’s done.)

The architecture/design guide that results from this book sprint will join the results of previous book sprints: the operations guide and the security guide. (Are there others?)

This is my first book sprint, and I’m really excited to be able to participate. It will be great to have the opportunity to work with some very talented folks within the OpenStack community (some I’ve met/already know, others I will be meeting for the very first time). VMware has generously offered to host the book sprint, so we’ll be spending the week locked in a room in Palo Alto—but I’m hoping we might “need” to get some inspiration and spend some time outside on the campus!

Stay tuned for more details, as well as a post-mortem after the book sprint has wrapped up. Writing a book in five days is going to be a challenge, but I’m looking forward to the opportunity!


I don’t think that it is any secret that I’ve become a big fan of Markdown (MultiMarkdown to be precise; more on that in a moment) over the last couple of years. Likewise, it’s not really a secret that I’m an OS X user, although my “fanboi” status for OS X has waned a bit since the introduction of Lion and Mountain Lion (neither of which is as good, in my opinion, as Snow Leopard, but that’s an entirely different discussion—the jury is still out on Mavericks). In this post, I want to share with you a few Markdown tools for OS X that I’ve been using and that (hopefully) you’ll find helpful as well.

BBEdit

Because Markdown is just plain text, a good text editor is essential. For a long time, I was a diehard TextMate fan. It was lean, it was fast, and the syntax highlighting for Markdown was really good. Over time, though, as TextMate faded and waned, I found myself in a position of needing to find a new text editor. (Yes, I looked at the TextMate 2.0 alpha releases. I didn’t really like where the project was headed, though I’m sure that it’s perfectly fine for many folks.) I played with a variety of text editors, and finally settled on BBEdit, from Bare Bones Software.

One of the key things that attracted me to BBEdit was its extensibility. (This included fairly extensive AppleScript support.) For example, once I’d wrapped my head around how BBEdit works (which meant unwrapping my head from how TextMate worked), it was relatively trivial to incorporate additional AppleScripts and even UNIX scripts into my workflow (here’s a non-Markdown example). I’m still uncovering tips and tricks for maximizing my use of BBEdit, but if you’re in the market for a good text editor to use with Markdown, I’d recommend giving BBEdit a serious look.

MultiMarkdown

It didn’t take me too long to stop relying on built-in Markdown processors in my text editors and switch to the standalone MultiMarkdown processor from MultiMarkdown’s creator, Fletcher Penney. In addition to a few syntax extensions, MultiMarkdown (MMD) adds more output options—not just HTML, but also OPML and ODF. (ODF can then be easily transformed into RTF or DOC/DOCX using a built-in tool from OS X named textutil.)
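
As a rough illustration of that conversion path (using HTML as the intermediate format rather than ODF, with placeholder file names; the exact options may vary depending on your MultiMarkdown version), the command-line version looks something like this:

    multimarkdown post.md > post.html
    textutil -convert rtf -output post.rtf post.html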

Because of BBEdit’s extensibility—specifically, because of the ability to integrate UNIX scripts into the application—it was a piece of cake to use BBEdit for writing the Markdown, then leverage the standalone MMD processor to convert to the final output format.

One other thing I really like about MultiMarkdown is the ability to add metadata to the document. This allows you to embed keywords, author name, document title, project, affiliation, dates, etc., into the document itself. This has two benefits:

  1. It makes it easier to find the right original documents, since each document now carries additional information that you can use in Spotlight.
  2. When converting to HTML, this information gets embedded into the HTML header.

To make embedding the Markdown metadata easier, I use a Typinator snippet.
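
For illustration, MultiMarkdown metadata is simply a set of key-value pairs at the very top of the document, followed by a blank line; the values shown here are placeholders:

    Title: Sample Blog Post
    Author: Jane Doe
    Date: 2013-11-12
    Keywords: markdown, tools, osx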

QuickLook Generator

Available here, this QuickLook generator allows you to preview Markdown/MultiMarkdown files using OS X’s QuickLook functionality. This might not seem like a big deal, but I’ve actually found it quite useful.

Pandoc

Pandoc, from John MacFarlane, is a treasure trove of document conversion functionality. It supports an impressive list of both input and output formats. While the standalone MultiMarkdown processor handles most of what I need, having Pandoc in my pocket when I need some additional conversion options is very powerful. For example, I can (and sometimes do) use Pandoc to handle automatic conversions from Markdown/MultiMarkdown directly to Rich Text Format (RTF) or DOC/DOCX. Pandoc also allows you to specify custom templates, so that you could create custom RTF templates that would control how the final RTF document is formatted. All in all, a very powerful tool.
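
As a quick example (the file names are placeholders), converting a Markdown file directly to DOCX or RTF with Pandoc looks something like this:

    pandoc post.md -o post.docx
    pandoc -s post.md -o post.rtf

The -s (standalone) flag tells Pandoc to produce a complete document rather than just a fragment.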

Marked

Marked is a relatively new addition to my Markdown toolset, although it’s been around for a while. The premise behind Marked is simple: preview any Markdown (or MultiMarkdown) document. It turns out this functionality is quite useful (more so than I thought it would be, to be honest). Brett Terpstra, the developer behind Marked, has added features like readability statistics, keyword highlighting, and repetitive word visualization to make Marked even more useful. Another feature I’m finding very handy is the ability to easily export to PDF from Marked, something that is possible using other tools but not nearly as easy. Now, if only I could control Marked via AppleScript, so I could automate that process…that would be ideal.

I hope this information is useful to someone. I’ve found that using Markdown via some of these tools has really helped my productivity; perhaps you will find the same thing. As always, feel free to post any courteous comments, corrections, or clarifications below.


No Man is an Island

The phrase “No man is an island” is attributed to John Donne, an English poet who lived in the late 1500s and early 1600s. The phrase comes from his Meditation XVII, and was borrowed later by Thomas Merton to become the title of a book he published in 1955. In both cases, the phrase is used to discuss the interconnected nature of humanity and mankind. (Side note: the phrase “for whom the bell tolls” comes from the same origin.)

What does this have to do with IT? That’s a good question. As I was preparing to start the day today, I took some time to reflect upon my career; specifically, the individuals who have been placed in my life and career. I think all people are prone to overlook the contributions that others have made to their own successes, but I think that IT professionals may be a bit more affected in this way. (I freely admit that, having spent my entire career as an IT professional, my view may be skewed.) So, in the spirit of recognizing that no man is an island—meaning that who we are and what we accomplish are intricately intertwined with those around us—I wanted to take a moment and express my thanks and appreciation for a few folks who have helped contribute to my success.

So, who has helped contribute to my achievements? The full list is too long to publish, but here are a few notables that I wanted to call out (in no particular order):

  • Chad Sakac took the opportunity to write the book that would become Mastering VMware vSphere 4 and gave it to me instead. (If you aren’t familiar with that story, read this.)
  • My wife, Crystal, is just awesome—she has enabled and empowered me in many, many ways. ‘Nuff said.
  • Forbes Guthrie allowed me to join him in writing VMware vSphere Design (as well as the 2nd edition), has been a great contributor to the Mastering VMware vSphere series, and has been a fabulous co-presenter at the last couple VMworld conferences.
  • Chris McCain (who recently joined VMware and has some great stuff in store—stay tuned!) wrote Mastering VMware Infrastructure 3, the book that I would revise to become Mastering VMware vSphere 4.
  • Andy Sholomon, formerly with Cisco and now with VCE, was kind enough to provide some infrastructure for me to use when writing Mastering VMware vSphere 5. Without it, writing the book would have been much more difficult.
  • Rick Scherer, Duncan Epping, and Jason Boche all served as technical editors for various books that I’ve written; their contributions and efforts helped make those books better.

To all of you: thank you.

The list could go on and on and on; if I didn’t expressly call your name out, please don’t feel bad. My point, though, is this: have you taken the time recently to thank others in your life that have contributed to your success?


Well, the title kind of says it all—yes, there will be an update to the popular Mastering VMware vSphere 5 book, with new content for the vSphere 5.5 release announced today at VMworld. Availability of the new title is expected in late October or early November, but I believe it’s already available for pre-order on Amazon.

However, this book represents more than just another title in the Mastering VMware vSphere series. It represents a “changing of the guard,” so to speak. In order to understand why, allow me to share with you the story of how my very first book, Mastering VMware vSphere 4, came to be. This is a story that very few people know.

I suppose the story starts in 2008. As a blogger, I’d gained some visibility as a result of my liveblogging at VMworld 2007, and in 2008 I met Chad Sakac. Chad is now a hugely popular figure within the VMware community, but at the time he was “just” the leader of a little-known group within EMC. (This is the group that would later become known as the vSpecialists.) Chad and I chatted, geeked out on some VMware stuff, and became buddies. I didn’t really think too much about the connection; we were just a couple of virtualization geeks making a connection.

Then came early 2009. Chad contacted me, and said he’d been approached about writing a book. Unfortunately, he was unable to write the book; would I be interested in writing it, he asked. Heck yeah! Writing a book had been an item on my bucket list for a really long time. So Chad made the connections to Wiley/Sybex, contracts were signed, and at VMworld 2009 Mastering VMware vSphere 4 was released, quickly becoming a massive hit. I joined Chad’s vSpecialist team in 2010, and remained there for 3 years before transitioning to VMware earlier this year to focus on network virtualization.

Looking back on this series of events, it’s easy to see that this was a huge opportunity I’d received. So, when it came time to discuss writing the book that would become Mastering VMware vSphere 5.5, I asked myself: “I was just given this opportunity. Can I do the same for someone else? Can I pay this forward?” It was at that point I decided that I would not be the lead author for the next revision; instead, I would pass the torch on to someone else—someone else who had “Write a book” on their bucket list. I wanted to give someone else the same opportunity I’d been given.

This is why Mastering VMware vSphere 5.5 represents a changing of the guard. This book embodies my decision to pay it forward, to give another worthy individual—in this case, Nick Marshall—the opportunity to do something he really wanted to do. Many of you know Nick; he’s been involved in the VMware community in a number of ways, through support of the vBrownBag podcast as well as through his work with Alastair Cooke on AutoLab. In addition to that, he’s just a really nice guy, and that counts for something, too. I’m really thrilled that Nick has taken the lead with this book, and I’m really looking forward to seeing where he takes the Mastering VMware vSphere series.

So, if you’re looking for an authoritative reference to the vSphere 5.5 release, I would encourage you to pick up Mastering VMware vSphere 5.5. Nick and I, along with our co-conspirators Forbes Guthrie, Josh Atwell, and Matt Liebowitz, have done our best to produce something that is useful and informative. I hope that you agree.


A Short Dry Spell

It’s been a couple of weeks since I published anything here, so I wanted to just provide a brief update. I know that posting something about why I haven’t posted something is…odd, I guess you could say. In any case, a number of factors—some personal, some professional—have contributed to why I haven’t been able to generate some useful new content in the last couple of weeks. Of course, there are blogs that go for months between posts, so a small gap of a couple weeks isn’t really a big deal. After over 8 years (I started blogging in May 2005), I think you can rest assured that I have no intention of shutting down anytime soon.

However, despite this dry spell, I am determined to continue with my Learning NVP blog series (part 1 is here), as a great many people have expressed interest. Fortunately, I made some headway on some blockers that were preventing progress, and that gives me hope for new content soon. I’m also exploring new posts on Puppet, Open vSwitch (OVS), OpenFlow, and—who knows—maybe VMware will have some snazzy announcements at VMworld that will give me new fodder for posts. We’ll see.

So, bear with me as I work through this short bout of writer’s block. I hope to be back soon with some new content. Thanks!


Welcome to Technology Short Take #33, the latest in my irregularly-published series of articles discussing various data center technology-related links, articles, rants, thoughts, and questions. I hope that you find something useful here. Enjoy!

Networking

  • Tom Nolle asks the question, “Is virtualization reality even more elusive than virtual reality?” It’s a good read; the key thing that I took away from it was that SDN, NFV, and related efforts are great, but what we really need is something that can pull all these together in a way that customers (and providers) reap the benefits.
  • What happens when multiple VXLAN logical networks are mapped to the same multicast group? Venky explains it in this post. Venky also has a great write-up on how the VTEP (VXLAN Tunnel End Point) learns and creates the forwarding table.
  • This post by Ranga Maddipudi shows you how to use App Firewall in conjunction with VXLAN logical networks.
  • Jason Edelman is on a roll with a couple of great blog posts. First up, Jason goes off on a rant about network virtualization, briefly hitting topics like the relationship between overlays and hardware, the role of hardware in network virtualization, the changing roles of data center professionals, and whether overlays are the next logical step in the evolution of the network. I particularly enjoyed the snippet from the post by Bill Koss. Next, Jason dives a bit deeper on the relationship between network overlays and hardware, and shares his thoughts on where it does—and doesn’t—make sense to have hardware terminating overlay tunnels.
  • Another post by Tom Nolle explores the relationship—complicated at times—between SDN, NFV, and the cloud. Given that we define the cloud (sorry to steal your phrase, Joe) as elastic, pooled resources with self-service functionality and ubiquitous access, I can see where Tom states that to discuss SDN or NFV without discussing cloud is silly. On the flip side, though, I have to believe that it’s possible for organizations to make a gradual shift in their computing architectures and processes, so one almost has to discuss these various components individually; trying to tie them all together makes the discussion almost impossible. Thoughts?
  • If you haven’t already introduced yourself to VXLAN (one of several draft protocols used as an overlay protocol), Cisco Inferno has a reasonable write-up.
  • I know Steve Jin, and he’s a really smart guy. I must disagree with some of his statements regarding what software-defined networking is and is not and where it fits, written back in April. I talked before about the difference between network virtualization and SDN, so no need to mention that again. Also, the two key flaws that Steve identifies—single point of failure and scalability—aren’t flaws with SDN/network virtualization, but rather flaws in an implementation of said technologies, IMHO.

Servers/Hardware

  • Correction from the last Technology Short Take—I incorrectly stated that the HP Moonshot offerings were ARM-based, and therefore wouldn’t support vSphere. I was wrong. The servers (right now, at least) are running Intel Atom S1260 CPUs, which are x86-based and do offer features like Intel VT-x. Thanks to all who pointed this out, and my apologies for the error!
  • I missed this on the #vBrownBag series: designing HP Virtual Connect for vSphere 5.x.

Security

Cloud Computing/Cloud Management

  • Hyper-V as hypervisor with OpenStack Compute? Sure, see here.
  • Cody Bunch, who has been focusing quite a bit on OpenStack recently, has a nice write-up on using Razor and Chef to automate an OpenStack build. Part 1 is here; part 2 is here. Good stuff—keep it up, Cody!
  • I’ve mentioned in some of my OpenStack presentations (see SpeakerDeck or Slideshare) that a great place to start if you’re just getting started is DevStack. Here, Brent Salisbury has a nice write-up on using DevStack to install OpenStack Grizzly.

Operating Systems/Applications

  • Boxen, a tool created by GitHub to manage their OS X Mountain Lion laptops for developers, looks interesting. Might be a useful tool for other environments, too.
  • If you use TextMate2 (I switched to BBEdit a little while ago after being a long-time TextMate user), you might enjoy this quick post by Colin McNamara on Puppet syntax highlighting using TextMate2.

Storage

  • Anyone have more information on Jeda Networks? They’ve been mentioned a couple of times on GigaOm (here and here), but I haven’t seen anything concrete yet. Hey, Stephen Foskett, if you’re reading: get Jeda Networks to the next Tech Field Day.
  • Tim Patterson shares some code from Luc Dekens that helps check VMFS version and block sizes using PowerCLI. This could come in quite handy in making sure you know how your datastores are configured, especially if you are in the midst of a migration or have inherited an environment from someone else.

Virtualization

  • Interested in using SAML and Horizon Workspace with vCloud Director? Tom Fojta shows you how.
  • If you aren’t using vSphere Host Profiles, this write-up on the VMware SMB blog might convince you why you should and show you how to get started.
  • Michael Webster tackles the question: is now the best time to upgrade to vSphere 5.1? Read the full post to see what Michael has to say about it.
  • Duncan points out an easy error to make when working with vSphere HA heartbeat datastores in this post. Key takeaway: sometimes the fix is a lot simpler than we might think at first. (I know I’m guilty of making things more complicated than they need to be at times. Aren’t we all?)
  • Jon Benedict (aka “Captain KVM”) shares a script he wrote to help provide high availability for RHEV-M.
  • Chris Wahl has a nice write-up on using log shipping to protect your vCenter database. It’s a bit over a year old (surprised I missed it until now), and—as Chris points out—log shipping doesn’t protect the database (primary and secondary copies) against corruption. However, it’s better than nothing (which I suspect is what far too many people are using).

Other

  • If you aspire to be a writer—whether that be a blogger, author, journalist, or other—you might find this article on using the DASH method for writing to be helpful. The six tips at the end of the article are especially helpful, I think.

Time to wrap this up for now; the rest will have to wait until the next Technology Short Take. Until then, feel free to share your thoughts, questions, or rants in the comments below. Courteous comments are always welcome!


In some of the presentations that I give on productivity and efficiency, one of the things I mention is reducing the friction; that is, making processes more streamlined so they’re easier to perform. In this post, I’m going to describe one way I reduced the friction for producing and publishing blog posts using BBEdit, TextSoap, MarsEdit, and some AppleScript.

It’s no secret that I’ve become a huge fan of Markdown, the human-readable plain text “markup” format created by John Gruber. The vast majority of all my content is now created in Markdown, and then converted to RTF (to share with my Office-using co-workers), PDF (for broader publication), or HTML (for publishing online). Until very recently, my blog publishing process looked something like this:

  1. Write the blog post in Markdown, using TextMate.
  2. Using a built-in Markdown binary in TextMate, convert the Markdown into HTML.
  3. Run the raw HTML through TextSoap (very handy tool) to remove smart quotes and curly apostrophes.
  4. Paste the parsed HTML into MarsEdit for publication to my blog.

While it might seem complicated, it really wasn’t—but it wasn’t as seamless as it could be. So, I set out to improve the process. The first big change was a switch from TextMate to BBEdit, which is more extensible (and kept up to date by the developer). That change allowed me to do two things:

  • Switch from the built-in Markdown support in TextMate to using a separate (and more up-to-date) MultiMarkdown binary maintained by Fletcher Penney.
  • Introduce an AppleScript (BBEdit has outstanding support for AppleScript) to automate some portion of the process.

My first pass at automating the process just got me back to where I was before—writing Markdown in BBEdit, converting to HTML, cleaning the HTML with TextSoap, and pasting into MarsEdit. Not too impressive, but acceptable, and a process with which I was familiar. I stuck with that process for a while, primarily because it was a known entity. A couple of days ago, though, I asked myself: Can I do better? Can I be more efficient?

So, my second pass at automating the process is much more comprehensive. The AppleScript I wrote as a result of challenging myself to reduce the friction does the following:

  • Takes the Markdown from BBEdit and converts it to HTML.
  • Using the HTML produced by the standalone MultiMarkdown binary, it then calls TextSoap to (in the background) clean the HTML according to a custom cleaner I’d created (the custom cleaner, called “Replace HTML Entities,” just replaces curly quotes and curly apostrophes, which don’t translate well on my site).
  • Creates a new, blank blog post in MarsEdit, into which it pastes the cleaned HTML as the body of the post.

I store the script in BBEdit’s scripts folder, which means I can invoke the script easily from within BBEdit.

Here’s the AppleScript itself (click here if the script doesn’t show up):

Now, I can write my Markdown in BBEdit, invoke the script, and end up with clean HTML sitting in a new blog post in MarsEdit. All I need to do to publish the post at that point is supply the metadata (tags, categories, title, excerpt) and click Send to Blog. Done. (I used this process for this post, in fact.) How’s that for reduced friction?
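
For those curious about what the conversion stage does (this is not the AppleScript itself, just a rough command-line stand-in using a hypothetical post.md file), it’s roughly equivalent to converting the Markdown to HTML with smart typography disabled, so there are no curly quotes to clean up, and placing the result on the clipboard for pasting into MarsEdit:

    multimarkdown --nosmart post.md | pbcopy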

