Web


It seems as if APIs are popping up everywhere these days. While this isn’t a bad thing, it does mean that IT professionals need to have a better understanding of how to interact with these APIs. In this post, I’m going to discuss how to use the popular command line utility curl to interact with a couple of RESTful APIs—specifically, the OpenStack APIs and the VMware NSX API.

Before I go any further, I want to note that to work with the OpenStack and VMware NSX APIs you’ll be sending and receiving information in JSON (JavaScript Object Notation). If you aren’t familiar with JSON, don’t worry—I have an introductory post on JSON that will help get you up to speed. (Mac users might also find this post helpful.)

Also, please note that this post is not intended to be a comprehensive reference to the (quite extensive) flexibility of curl. My purpose here is to provide enough of a basic reference to get you started. The rest is up to you!

To make consuming this information easier (I hope), I’ll break this information down into a series of examples. Let’s start with passing some JSON data to a REST API to authenticate.

Example 1: Authenticating to OpenStack

Let’s say you’re working with an OpenStack-based cloud, and you need to authenticate to OpenStack using OpenStack Identity (“Keystone”). Keystone uses the idea of tokens, and to obtain a token you have to pass correct credentials. Here’s how you would perform that task using curl.

You’re going to use a couple of different command-line options:

  • The “-d” option allows you to pass data to the remote server (in this example, the remote server running OpenStack Identity). You can either embed the data in the command or pass the data using a file; I’ll show you both variations.
  • The “-H” option allows you to add an HTTP header to the request.

If you want to embed the authentication credentials into the command line, then your command would look something like this:

curl -d '{"auth":{"passwordCredentials":{"username": "admin",
"password": "secret"},"tenantName": "customer-A"}}'
-H "Content-Type: application/json" http://192.168.100.100:5000/v2.0/tokens

I’ve wrapped the text above for readability, but on the actual command line it would all run together with no breaks. (So don’t try to copy and paste; it probably won’t work.) You’ll naturally want to substitute the correct values for the username, password, tenant, and OpenStack Identity URL.

As you might have surmised by the use of the “-H” option in that command, the authentication data you’re passing via the “-d” parameter is actually JSON. (Run it through python -m json.tool and see.) Because it’s actually JSON, you could just as easily put this information into a file and pass it to the server that way. Let’s say you put this information (which you could format for easier readability) into a file named credentials.json. Then the command would look something like this (you might need to include the full path to the file):

curl -d @credentials.json -H "Content-Type: application/json" http://192.168.100.100:35357/v2.0/tokens
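
For reference, here’s the same authentication data as it would appear in credentials.json, formatted for readability:

{
    "auth": {
        "passwordCredentials": {
            "username": "admin",
            "password": "secret"
        },
        "tenantName": "customer-A"
    }
}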

What you’ll get back from OpenStack—assuming your command is successful—is a wealth of JSON. I highly recommend piping the output through python -m json.tool as it can be difficult to read otherwise. (Alternately, you could pipe the output into a file.) Of particular usefulness in the returned JSON is a section that gives you a token ID. Using this token ID, you can prove that you’ve authenticated to OpenStack, which allows you to run subsequent commands (like listing tenants, users, etc.).
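
As a sketch of what this looks like in practice (the exact layout of the response varies across OpenStack releases), you could pipe the call through python -m json.tool and look for the token ID inside the “access” section; the “-s” flag just silences curl’s progress meter:

curl -s -d @credentials.json -H "Content-Type: application/json" http://192.168.100.100:5000/v2.0/tokens | python -m json.tool

The relevant portion of the output looks roughly like this:

"token": {
    "expires": "<expiration timestamp>",
    "id": "<Token ID>"
}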

Example 2: Authenticating to VMware NSX

Not all RESTful APIs handle authentication in the same way. In the previous example, I showed you how to pass some credentials in JSON-encoded format to authenticate. However, some systems use other methods for authentication. VMware NSX is one example.

In this example, you’ll need to use a different set of curl command-line options:

  • The “--insecure” option tells curl to skip SSL certificate validation. The VMware NSX controllers listen only on HTTPS (not HTTP), typically using self-signed certificates that curl can’t otherwise validate.
  • The “-c” option stores data received by the server (one of the NSX controllers, in this case) into a cookie file. You’ll then re-use this data in subsequent commands with the “-b” option.
  • The “-X” option allows you to specify the HTTP method, which normally defaults to GET. In this case, you’ll use the POST method along with the “-d” parameter you saw earlier to pass authentication data to the NSX controller.

Putting all this together, the command to authenticate to VMware NSX would look something like this (naturally you’d want to substitute the correct username and password where applicable):

curl --insecure -c cookies.txt -X POST -d 'username=admin&password=admin' https://192.168.100.50/ws.v1/login
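
If the login succeeds, the session data lands in cookies.txt. As a quick sanity check (a sketch; you’ll see this pattern again in Example 4), you can pass the cookie file back with the “-b” option on a simple GET request, such as listing logical switches:

curl --insecure -b cookies.txt https://192.168.100.50/ws.v1/lswitch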

Example 3: Gathering Information from OpenStack

Once you’ve gotten an authentication token from OpenStack, as I showed you in Example 1 above, you can start using API requests to get information from OpenStack.

For example, let’s say you wanted to list the instances for a particular tenant. Once you’ve authenticated, you’d want to get the ID for the tenant in question, so you’d need to ask OpenStack to give you a list of the tenants (you’ll only see the tenants your credentials permit). The command to do that would look something like this:

curl -H "X-Auth-Token: <Token ID>" http://192.168.100.70:5000/v2.0/tenants

The value to be substituted for token ID in the above command is returned by OpenStack when you authenticate (that’s why it’s important to pay attention to the data being returned). In this case, the data returned by the command will be a JSON-encoded list of tenants, tenant IDs, and tenant descriptions. From that data, you can get the ID of the tenant for whom you’d like to list the instances, then use a command like this:

curl -H "X-Auth-Token: <Token ID>" http://192.168.100.70:8774/v2/<Tenant ID>/servers

This will return a stream of JSON-encoded data that includes the list of instances and each instance’s unique ID—which you could then use to get more detailed information about that instance:

curl -H "X-Auth-Token: <Token ID>" http://192.168.100.70:8774/v2/<Tenant ID>/servers/<Server ID>
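
As with the authentication response, these calls return dense JSON, so the python -m json.tool trick applies here as well; for example:

curl -s -H "X-Auth-Token: <Token ID>" http://192.168.100.70:8774/v2/<Tenant ID>/servers | python -m json.tool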

By and large, the API is reasonably well-documented; you just need to be sure that you are pointing the API call at the right endpoint. For example, authentication has to happen against the server running Keystone, which may or may not be the same server that is running the Nova API services. (In the examples I just provided, Keystone and the Nova API services are running on the same host, which is why the IP address is the same in the command lines.)

Example 4: Creating Objects in VMware NSX

Getting information from VMware NSX using the RESTful API is very much like what you’ve already seen in getting information from OpenStack. Of course, the API can also be used to create objects. To create objects—such as logical switches, logical switch ports, or ACLs—you’ll use a combination of curl options:

  • You’ll use the “-b” option to pass cookie data (stored when you authenticated to NSX) back for authentication.
  • The “-X” option allows you to specify the HTTP method (in this case, POST).
  • The “-d” option lets you transfer JSON-encoded data that forms the request for the object you’re going to create. You’ll specify a filename here, preceded by the “@” symbol.
  • The “-H” option adds an appropriate “Content-Type: application/json” header to the request, since you’re passing JSON-encoded data to the NSX controller.

When you put it all together, it looks something like this (substituting appropriate values where applicable):

curl --insecure -b cookies.txt -d @new-switch.json
-H "Content-Type: application/json" -X POST https://192.168.100.50/ws.v1/lswitch

As I mentioned earlier, you’re passing JSON-encoded data to the NSX controller; the new-switch.json file referenced in the above command supplies the request body for the logical switch. A minimal version of that file might look something like this (a sketch following the NVP/NSX logical switch schema; the transport zone UUID is a placeholder you’d replace with a value from your environment):

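{
    "display_name": "test-switch",
    "transport_zones": [
        {
            "zone_uuid": "<transport zone UUID>",
            "transport_type": "stt"
        }
    ]
}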

Once again, I recommend piping the output of this command through python -m json.tool, as what you’ll get back on a successful call is some useful JSON data that includes, among other things, the UUID of the object (logical switch, in this case) that you just created. You can use this UUID in subsequent API calls to list properties, change properties, add logical switch ports, etc.

Clearly, there is much more that can be done with the OpenStack and VMware NSX APIs, but this at least should give you a starting point from which you can continue to explore in more detail. If anyone has any corrections, clarifications, or questions, please feel free to post them in the comments section below. All courteous comments (with vendor disclosure, where applicable) are welcome!


Divorcing Google

The time has come; all good things must come to an end. So it is with my relationship with Google and the majority of their online services. As of right now, I’m in the midst of separating myself from most of Google’s services. I’ve mentioned this several times on Twitter, and a number of people asked me to write about the process. So, here are the details so far.

The first question that usually comes up is, “Why leave Google?” That’s a fair question. There is no one reason, but rather a number of different factors that contributed to my decision:

  • Google kills off services seemingly on a whim. What if a service I’ve come to use quite heavily is no longer valuable to Google? That was the case with Google Reader, a service for which I still haven’t found a reasonable alternative. (Feedly is close.)
  • Google is closing off their ecosystem. Everything ties back to Google+, even if you don’t want anything to do with Google+. Google Talk federation with external XMPP-based services no longer works, which means you can’t use Google Talk to communicate with users on other XMPP services (only with other Google Talk users).
  • Support for third-party XMPP clients will stop working in May 2014, which will in turn break a number of other things. One casualty is the ability to use an Obihai device to connect to Google Voice.
  • The quality and reliability of their free service tiers aren’t so great (in my experience), and their paid service tiers aren’t price-competitive, in my opinion.
  • Google’s non-standard IMAP implementation is horribly, awfully slow.
  • Finally, Google is now doing things they said they’d never do (like putting banner ads in search results). What’s next?

Based on these factors, I made the decision to switch to other services instead of using Google. Here are the services that I’ve settled on so far:

  • For search, I’m using a combination of DuckDuckGo (for general searching) and Bing Images (for image searches). Bing Image Search is actually quite nice; it allows you to search according to license (so that you can find images that you are legally allowed to re-use).
  • For e-mail, I’m using Fastmail. Their IMAP service rocks and is noticeably faster than anything I’ve ever seen from Google. The same goes for their web-based interface, which is also screaming fast (and quite pleasant to use). The spam protection isn’t quite as good as Google’s, but I’m still in the process of training my Bayes database. I anticipate that it will improve over time.
  • For IM, I’m using Hosted.IM and Fastmail, both of which are XMPP-based. I’ll use Hosted.IM for one domain where my username contains a dot character; this isn’t supported on Fastmail. All other domains will run on a Fastmail XMPP server.
  • For contact and calendar syncing, I’m using Fruux. Fruux supports CardDAV and CalDAV, both of which are also supported natively on OS X and iOS (among other systems). Support for CardDAV/CalDAV on Android is also available inexpensively.

That frees me up from GMail, Google Calendar, Google Talk, and Google Contacts. I’ve never liked or extensively used Google Drive (Dropbox is miles ahead of Google Drive, in my humble opinion) or Google Docs, so I don’t really have to worry about those.

There are a couple of services for which I haven’t yet found a suitable replacement; for example, I haven’t yet found a replacement for Google Voice. I’m looking at SIP providers for my home line, but haven’t made any firm decisions yet. I also haven’t found a replacement for FeedBurner yet.

Also, I won’t be able to completely stop using Google services; since I own an Android phone, I have to use Google Play Store and Google Wallet. Since I don’t have a replacement (yet) for Google Voice, I have a single Google account that I use for these services as well as for IM to Google Talk contacts (since I can’t use XMPP to communicate with them). Once Google Voice is replaced, I’ll be down to using only Google Play, Google Wallet, and Google Talk.

So, that’s where things stand. I’m open to questions, thoughts, or suggestions for other services I should investigate. Just speak up in the comments below. All courteous comments are welcome!


In this post I’m going to show you how to make JSON (JavaScript Object Notation) output more readable using a BBEdit Text Filter. This post comes out of some recent work I’ve done in learning how to interact with various REST APIs. My initial REST API explorations have focused on the NVP/NSX API, but I plan to soon expand my explorations to include other APIs, like OpenStack.

(You might be wondering why I’m exploring REST APIs and stuff like JSON. I believe that having a better understanding of the APIs these products use will help drive a deeper and more complete understanding of the underlying products. I could be wrong…time will tell.)

BBEdit Text Filters, as you may already know, simply take the current text (or selected text) in BBEdit, do something to it, and then output the result. The “do something to it” is, of course, the magic. You can, for example—and this is something that I do—use the MultiMarkdown command-line executable to transform a (Multi)Markdown document in BBEdit to HTML. All that is required is to place the script (or a link to the script) in the ~/Library/Application Support/BBEdit/Text Filters directory. The script just needs to accept input on STDIN, transform it in whatever way you want, and spit out the results on STDOUT. BBEdit does the rest.

In this case, you’re going to use an extremely simple Bash shell script containing a single Python command to transform JSON-serialized output into a more human-readable format.

First, let’s take a look at some JSON-serialized output. Here’s the sort of output you get from an API call to NVP/NSX to list the logical switches (an illustrative sample with placeholder values, collapsed onto a single line the way the API actually returns it):

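{"result_count": 1, "results": [{"_href": "/ws.v1/lswitch/<uuid>", "_schema": "/ws.v1/schema/LogicalSwitchConfig", "type": "LogicalSwitchConfig"}]}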

It is human-readable, but just barely. How can we make this a bit easier for humans to read and parse? Well, it turns out that OS X (and probably most recent flavors of Linux) comes with a version of Python pre-installed, and the pre-installed version of Python comes with the ability to “prettify” (make more human-readable) JSON text. (In the case of OS X 10.8 “Mountain Lion”, the pre-installed version of Python is version 2.7.2.) With grateful thanks to the folks on Twitter who introduced me to this trick, the command you would use in this instance is as follows:

python -m json.tool

Very simple, right? To turn this into a BBEdit Text Filter, we need only wrap this into a very simple shell script, such as this:

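#!/usr/bin/env bash

python -m json.tool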

Place this script (or a link to this script) in the ~/Library/Application Support/BBEdit/Text Filters directory, restart BBEdit, and you should be good to go. Now you can copy and paste the output from an API call like the output above, run it through this text filter, and get output that looks like this:

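{
    "result_count": 1,
    "results": [
        {
            "_href": "/ws.v1/lswitch/<uuid>",
            "_schema": "/ws.v1/schema/LogicalSwitchConfig",
            "type": "LogicalSwitchConfig"
        }
    ]
}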

Given that I’m new to a lot of this stuff, I’m sure that I have probably overlooked something along the way. There might be better and/or more efficient ways of handling this, or better tools to use. If you have any suggestions on how to improve any of this—or just suggestions on how I might do better in my API explorations—feel free to speak out in the comments below.


As part of some work I’ve been doing to stretch myself and my boundaries, I’ve recently started diving a bit deeper into working with REST APIs. As I started this exploration, one thing that kept coming up again and again was JSON. In this post, I’m going to try to provide an introduction to JSON for non-programmers (like me).

Let’s start with the acronym: “JSON” stands for “JavaScript Object Notation”. It’s a lightweight, text-based format, and is frequently used in conjunction with REST APIs and web-based services. You can find more details on the specifics of the JSON format at the JSON web site.

The basic structures of JSON are:

  • A set of name/value pairs
  • An ordered list of values

Now, that sounds simple enough, but let’s look at some examples to really bring this home. The examples that I’ll use are taken from API responses in my virtualized NVP/NSX lab using the NVP/NSX API.

First, here’s an example of a set of name/value pairs, along the lines of what the API returns (I’ve taken the liberty of making the raw output more readable for clarity’s sake and of substituting placeholder values for the real UUIDs; raw JSON data typically wouldn’t have line returns or whitespace):

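{
    "result_count": 3,
    "results": [
        {
            "_href": "/ws.v1/lswitch/<uuid-1>",
            "_schema": "/ws.v1/schema/LogicalSwitchConfig",
            "type": "LogicalSwitchConfig"
        },
        {
            "_href": "/ws.v1/lswitch/<uuid-2>",
            "_schema": "/ws.v1/schema/LogicalSwitchConfig",
            "type": "LogicalSwitchConfig"
        },
        {
            "_href": "/ws.v1/lswitch/<uuid-3>",
            "_schema": "/ws.v1/schema/LogicalSwitchConfig",
            "type": "LogicalSwitchConfig"
        }
    ]
}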

Let’s break that down a bit:

  • Each object is surrounded by curly braces (referred to just as braces by some). The entire JSON response is itself an object—at least this is how I view it—so it is surrounded by braces. It contains three objects, which are part of the “results” array (more on that in just a second).
  • Each object may have multiple name/value pairs separated by a comma. Name/value pairs may represent a single value (as with “result_count”) or multiple values in an array (as with “results”). So, in this example, there are two name/value pairs: one named “result_count” and one named “results”. Note the use of the colon separating the name from the associated value(s).
  • The second name/value pair in the API response is named “results”, but note that its value isn’t a single value; rather, it’s an array of values. Arrays are surrounded by brackets, and each element/item in the array is separated by a comma. In this particular case—and this will vary from API to API, as far as I know—note that the “result_count” value tells you exactly how many items are in the “results” array, making it incredibly easy to iterate through the items in the array.
  • In the “results” array, there are three items (or objects). Each of these items—each surrounded by braces—has three name/value pairs, separated by commas, with a colon separating the name from the value.

As you can see, JSON has a very simple structure and format, once you’re able to break it down.

There are numerous other examples and breakdowns of JSON around the web; here are a few that I found helpful in my education (which is still ongoing):

JSON Basics: What You Need to Know
JSON: What It Is, How It Works, & How to Use It (This one gets a bit deep for non-programmers, but you might find it helpful nevertheless.)
JSON Tutorial

You may also see the term “JSON-serialized”; this generally refers to data that has been formatted as JSON. To JSON-serialize data means to put it into JSON format; to deserialize JSON data means to parse (or deconstruct) the JSON output into some other format.
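
A quick way to see serialization and deserialization in action is the python -m json.tool command: it deserializes whatever JSON it reads on STDIN into native Python objects, then re-serializes those objects with friendlier formatting. For example:

echo '{"result_count": 1}' | python -m json.tool

{
    "result_count": 1
}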

I’m sure there’s a great deal more that could (and perhaps should) be said about JSON, but I did say this would be a non-programmer’s introduction to JSON. If you have any questions, thoughts, suggestions, or clarifications, please feel free to speak up in the comments below.

UPDATE: I’ve edited the text above based on some feedback in the comments. Thanks for your feedback; the post is better for it!


A couple of days ago I wrote about how to use the UNIX CLI in Mac OS X to shorten a URL via bit.ly, while adding the URL to your link history in case you want to re-use it in the future. Now I’m going to take that information and show you how to further integrate this into your Mac’s environment using AppleScript and Automator.

The necessary glue here consists of these two facts:

  1. AppleScript can execute a shell script using do shell script; this is what allows us to leverage the curl command I discussed in the previous post from within AppleScript.
  2. Automator can execute AppleScripts via the Run AppleScript action. This allows us to take the AppleScript (which is executing the shell script) and embed it into an Automator workflow.

To give credit where credit is due, this isn’t my idea at all; I’ve derived all this information from this post by David Poindexter. His shell command is different and doesn’t populate the user’s link history, but it does work. Robert Huttinger also built his own workflow, which served as a basis for my own.

First, here’s the AppleScript code that wraps around the curl command to shorten the URL:

on run (input)
 
  -- Replace these two values with your own bit.ly credentials
  set login to "YourUserNameHere" as string
  set apiKey to "YourAPIKeyHere" as string
  set input to (input as string)
 
  ignoring case
  -- Bail out (with a beep) if the selected text doesn't look like a URL
  if (((characters 1 thru 4 of input) as string) is not equal to "http") then
    beep
    return
  else
    -- Call the bit.ly API via curl; the response body is the shortened URL
    set curlCmd to "curl --stderr /dev/null \"http://api.bit.ly/v3/shorten?login=" & login & "&apiKey=" & apiKey & "&longURL=" & input & "&format=txt\""
    set shortURL to (do shell script curlCmd)
    return shortURL
  end if
  end ignoring
 
end run

Be careful with the line starting “set curlCmd…”; it’s wrapped above and you’ll need to properly escape the quotes with backslashes, as above, in order for it to work properly. You’ll clearly want to replace “YourUserNameHere” and “YourAPIKeyHere” with the appropriate values from your bit.ly account.


Once you have the AppleScript written, you can then embed it into an Automator workflow. I won’t bother to explain what Automator is or how it works here; there are numerous resources available to help in that regard. Rather, I’ll simply say that you only need to assemble the Run AppleScript, (optionally) the Show Growl Notification, and the Copy to Clipboard actions as shown in this screenshot. In my case, I’m using Automator to create a service that accepts text from any application; this means I need only select the text of a URL I’d like to shorten and then invoke this service. After a brief pause, a Growl notification pops up and the shortened bit.ly URL is on my clipboard, ready to be pasted into whatever application I need. And, since it’s now a Mac OS X Service, you can bind it to a hotkey for even easier access.

Again, credit goes to the others who have blazed this trail ahead of me; I’m merely posting my version here in the event it is useful to others. Comments, feedback, and suggestions are always welcome.


I was experimenting tonight with some ways to add more automation to my workflow. One process that is (relatively) time-consuming is the process of generating short URLs via bit.ly. This site had a brief tutorial on how to use curl to do it, but the shortened link didn’t show up in my link history. Upon browsing the bit.ly API documentation, though, I was able to fairly quickly piece together a command line that will shorten a URL via bit.ly and put the shortened URL in the user’s link history.

Note that in order to use this command, you’ll need your bit.ly API key. Your API key is easily accessed from your account settings page.

Here’s the command I tested (works on Mac OS X 10.6.4):

curl 'http://api.bit.ly/v3/shorten?login=<bit.ly login>&apiKey=<bit.ly API key>&longURL=<Long URL to be shortened>&format=txt'

In order to make this truly usable, there are some additional things that have to happen. The long URL has to be properly URL-encoded, as it can’t contain any spaces or special characters, for example; see the sketch below. But otherwise, this command is a workable solution to shortening a URL from the command line. All I need now is a small AppleScript wrapper around this and then I’ll have a URL shortening script I can bind to a hotkey. That should help speed the process up!
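
As a sketch of how that encoding might be handled from the shell (using the Python that ships with Mac OS X; the long URL here is just an example):

longurl=$(python -c 'import sys, urllib; print urllib.quote(sys.argv[1], safe="")' 'http://example.com/some page')
curl "http://api.bit.ly/v3/shorten?login=<bit.ly login>&apiKey=<bit.ly API key>&longURL=${longurl}&format=txt"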


Revisiting Evernote

About two years ago, I took a look at Evernote (here’s the main Evernote web site), which at that time was still in beta. While I was intrigued with the idea of Evernote, at that time I struggled with getting data into Evernote. The Web Clipper didn’t seem intuitive to me, and I wrestled with how best to use Evernote within my fledgling productivity system.

Since that time, I settled on the use of OmniFocus for organizing commitments and Yojimbo for organizing information (more on how I use these two applications is found in this update on my Getting Things Done setup). Using AppleScript as the glue between the consumption, organization, and creation layers has been tremendously useful for me. While I still have plenty of room to grow and improve, I feel like the system I’ve built really helps me stay productive, in part because it’s transparent (i.e., it doesn’t get in my way).

When I first evaluated Evernote, I wasn’t too familiar with AppleScript and I believe that the Mac version of Evernote had very little or no AppleScript support. With recent releases of the Mac Evernote application, their AppleScript support has improved dramatically, and so I thought I should revisit Evernote. Now that I could use AppleScript to help ease the process of getting information into Evernote, perhaps it would be a good fit into my workflow. In addition, I’d gain the ability to have access to my notes from my Mac, my iPhone, my iPad, and any web browser. Just as OmniFocus is available from any of my devices, so too would my information be available.

I was right about the AppleScript part; I was able to relatively easily adapt the scripts I’d written for Yojimbo to work with Evernote. Combined with FastScripts, this made capturing a URL from Camino or NetNewsWire into Evernote as simple as pressing a hotkey. (If anyone is interested in the scripts themselves, let me know and I’ll make them available.)

Unfortunately, I now find that my system is no longer as transparent as it used to be. The system is now getting in my way. I’ll grant you that some of this could be due to the switch from Yojimbo to Evernote. It takes time to grow accustomed to any change, and this is no different. The question then becomes: is it worth the effort to sustain the change? What benefits will switching to Evernote get me, and what challenges will it introduce? I’ve done a little bit of thinking about this, and here’s where things stand currently:

  • First, everything in Evernote is a note. This means that I have to take extra steps to separate out different data types in the event that I need to view or act upon only certain data types. Yojimbo, on the other hand, has separate data types for notes, bookmarks, images, etc. Is this a big deal? Not a huge deal, but it does introduce a small amount of additional work if I stick with Evernote.
  • Second, Evernote’s UI is terribly clunky compared to Yojimbo’s. Anytime you do anything with tags, the Tags area in the left-hand pane of the Evernote window expands—even if you don’t want it to. Searching for items by tag means using Evernote’s extended search syntax, which is buried at the end of the user’s guide (you’ll need to use something like “tag:ToRead” to find all items tagged “ToRead”). Evernote lacks Tag Explorer-like functionality. There’s no Smart Collections (or Smart Folders) functionality in Evernote, although you can use saved searches; unfortunately, Evernote doesn’t provide a UI for creating saved searches. All in all, it makes working with Evernote more difficult than performing a comparable task in Yojimbo (in my opinion).
  • Third, for an application that treats everything as a note, Evernote’s note functionality is surprisingly simplistic. If you use fonts and formatting in your Evernote notes, the iOS versions of Evernote can’t edit them. (To be fair, this is an Apple iOS limitation.) Even when I attempted to convert notes back to the equivalent of plain text using the Simplify Formatting command, some of the formatting remained, and there did not appear to be any way of correcting that behavior. Even more irritating, converting these notes back to plain text equivalent wasn’t detected as a change by the Evernote client, which meant that the updated note wasn’t synced up to Evernote’s online service. In fact, unless I actually edited the note (for example, by adding a character and then removing the character), Evernote wouldn’t even save the changes to plain text equivalence.
  • Finally, Yojimbo lacks the ability to sync data across multiple platforms. Heck, Yojimbo is a Mac-only application—it doesn’t have apps for any other platforms, much less the ability to synchronize the data. Keeping data in sync across devices and platforms is, of course, one of Evernote’s key features. So, how much is the ability to sync and access data across multiple applications worth? How much of an advantage will this truly offer? I’ve seen the benefits of having my commitments available on multiple platforms via OmniFocus, and I’m seeing the benefits of keeping my RSS feed synchronized via Google Reader (using NetNewsWire on my Mac and NewsRack on my iPhone and iPad). Will the same benefit hold true for notes?

All things considered, it seems as if I’m finding one potential advantage to Evernote (syncing data across devices) and three known drawbacks (lack of multiple data types, note functionality issues, and an unintuitive user interface). I just can’t decide if having information like URLs and brief notes available across devices is really as worthwhile as it’s made out to be. I’d love to hear feedback from readers about their viewpoint—has Evernote syncing really been useful? Speak up in the comments below. Thanks!


No, I haven’t found it yet. Sorry, I hope I didn’t get your hopes up with that headline. I’ve been testing a bunch of different Mac clients for Twitter, and I just can’t seem to find the client that has the right mix of features. So, in the hopes that some of the developers of these various applications are reading, here are some of the applications I’ve tested and what I like about each one. Now I just need someone to take all these features and roll them into the perfect Mac Twitter client…

  • Lounge: The Mac beta of Lounge takes the cake for the most complete integration with Twitter. From within this application, you can view user details, see who’s following whom and who’s being followed, view another user’s timeline, view Twitter search results, read private messages and retweets, view a tweet in a Web browser, copy a tweet’s URL…well, you get the picture. So what’s wrong with Lounge? Primarily speed. I’d also appreciate the ability to customize the display a little bit more than I can currently. Granted, Lounge is still an early beta (0.4.1), so I guess we have to cut them a little bit of slack.
  • NatsuLion: NatsuLion feels the most Cocoa-native here, with full support for transparency (which is a feature I like). I can adjust the display quite extensively, and it has a minimal desktop footprint. There are some trade-offs for that minimal desktop footprint, though, and NatsuLion seems the most susceptible to Twitter brown-outs and outages. Sometimes it will just…not work.
  • Canary: This is a brand-new app I just found earlier today. My #1 complaint about Canary is the display of the tweets—it’s just awful. They need a more streamlined and dynamic display of the timeline, like Lounge and Nambu (see below). Otherwise, I absolutely love the solid integration with a variety of URL shorteners—including credentials for those URL shortening services. Right now, though, Canary is seriously buggy. Switching between views sometimes doesn’t work, and applying a filter then removing the filter causes problems as well. Again, this is an early beta (Beta 2), so I suppose some bugginess is to be expected.
  • Nambu: Nambu is supposed to be more than just a Twitter client, but in current builds only the Twitter functionality works. It’s a pretty decent client, fairly quick and responsive. I like that it automatically contacts URL shorteners to expand out the full URL; this lets you know where you’re headed before you click on it (a good thing these days given all the web exploits that are available). It’s supposed to offer integration with tr.im, a URL shortening service, but it doesn’t really work. It will shorten the URL but won’t use your credentials (in fact, it won’t even save your credentials between launches).
  • Twitterrific: It wouldn’t be complete to talk about Twitter clients for the Mac without talking about Twitterrific. The only thing I like about Twitterrific is the AppleScript support. Otherwise, I absolutely cannot stand the user interface. I just don’t like it. Some people swear by it; it’s just not for me.
  • Bluebird: Bluebird is another application that’s just popped up in the last few days. The first time I tried it, it wouldn’t even launch (said that themes were missing); the second time I tried it, it worked. The themes are supposedly the big thing; you can use standard CSS/XHTML to style the appearance of the tweet timeline. Otherwise, it’s a very early build (Beta 1, I think) and it shows.
  • EventBox: I received a free build of EventBox from MacHeist, but I couldn’t get it to work. It would never even connect to Twitter.

So that’s where things stand. What would the perfect Mac Twitter client possess?

  • The extensive Twitter integration of Lounge
  • The smooth UI of NatsuLion blended with Lounge and Nambu
  • The URL shortening services integration of Canary
  • The AppleScript support of Twitterrific

That would, in my opinion, create the perfect Mac Twitter client.


I’d like to welcome our second sponsor, Hyper9! As you know, Hyper9 recently launched their flagship search-based administration product. I’m excited to be able to partner with them and I appreciate their sponsorship of the site.

If there are any other companies out there that may be interested in sponsoring the site, I have a few spots still remaining. Feel free to contact me if you want more information.


Site Maintenance

The site will be going down for site maintenance on Monday, March 23, at approximately 11PM MST (GMT-7). The site could be unavailable for as much as 2 hours. According to my hosting company (Bluehost), the hardware on which the site is running is getting upgraded. We should see an improvement in performance as a result of the upgrade.

I apologize in advance for any inconvenience.

