
Vendor Meetings at VMworld 2013

This year at VMworld, I wasn't in any of the breakout sessions because employees aren't allowed to register for breakout sessions in advance; we have to wait in the standby line to see if we can get in at the last minute. So, I decided to meet with some vendors that seemed interesting and get some additional information on their products. Here's a write-up on some of the vendor meetings I attended while in San Francisco.

Jeda Networks

I’ve mentioned Jeda Networks before (see here), and I was pretty excited to have the opportunity to sit down with a couple of guys from Jeda to get more details on what they’re doing. Jeda Networks describes themselves as a “software-defined storage networking” company. Given my previous role at EMC (involved in storage) and my current role at VMware focused on network virtualization (which encompasses SDN), I was quite curious.

Basically, what Jeda Networks does is create a software-based FCoE overlay on an existing Ethernet network. Jeda accomplishes this by actually programming the physical Ethernet switches (they have a series of plug-ins for the various vendors and product lines; adding a new switch just means adding a new plug-in). In the future, when OpenFlow or its derivatives become more ubiquitous, I could see using those control plane technologies to accomplish the same task. It’s a fascinating idea, though I question how valuable a software-based FCoE overlay is in a world that seems to be rapidly moving everything to IP. Even so, I’m going to keep an eye on Jeda to see how things progress.

Diablo Technologies

Diablo was a new company to me; I hadn't heard of them before their PR firm contacted me about a meeting at VMworld. Diablo has created what they call Memory Channel Storage, which puts NAND flash on a DIMM. Basically, it makes high-capacity flash storage accessible via the CPU's memory bus. To take advantage of high-capacity flash on the memory bus, Diablo supplies drivers for all the major operating systems (OSes), including ESXi, and this driver modifies the way that page swaps are handled. Instead of page swaps moving data from memory to disk—as would be the case in a traditional virtual memory system—the page swaps happen between DRAM on the memory bus and Diablo's flash on the memory bus. This means that page swaps are extremely fast (on the order of microseconds, not the milliseconds typically seen with disks).

To use the extra capacity, then, administrators must essentially "overcommit" their hosts. Say a host has 64GB of (traditional) RAM installed and 2TB of Diablo's DIMM-based flash. You'd then allocate 2TB of memory to VMs, and the hypervisor would swap pages at extremely high speed between the DRAM and the DIMM-based flash. At that point, the system DRAM almost looks like another level of cache.
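To put numbers on that hypothetical example, the overcommit math works out like this (a quick sketch using the 64GB/2TB figures from the scenario above, not measured values):

```shell
# Hypothetical figures from the example above
DRAM_GB=64          # traditional RAM installed
FLASH_GB=2048       # 2TB of DIMM-based flash

# The hypervisor is effectively overcommitting DRAM by this factor
echo "Overcommit ratio: $((FLASH_GB / DRAM_GB)):1"
```

With these numbers, the host presents 32 times more "memory" to VMs than it has actual DRAM, with the flash absorbing the difference via fast page swaps.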

This “overcommitment” technique could have some negative effects on existing monitoring systems that are unaware of the underlying hardware configuration. Memory utilization would essentially run at 100% constantly, though the speed of the DIMM-based flash on the memory bus would mean you wouldn’t take a performance hit.

In the future, Diablo is looking for ways to make their DIMM-based flash appear to an OS as addressable memory, so that the OS would just see 3.2TB (or whatever) of RAM, and access it accordingly. There are a number of technical challenges there, not the least of which is ensuring proper latency and performance characteristics. If they can resolve these technical challenges, we could be looking at a very different landscape in the near future. Consider the effects of cost-effective servers with 3TB (or more) of RAM installed. What effect might that have on modern data centers?


HyTrust

HyTrust is a company with whom I've been in contact for several years now (since early 2009). Although HyTrust has been profitable for some time now, they recently announced a new round of funding intended to help accelerate their growth (though they're already on track to quadruple sales this year). I chatted with Eric Chiu, President and founder of HyTrust, and we talked about a number of areas. I was interested to learn that HyTrust had officially productized a proof-of-concept from 2010 leveraging Intel's TPM/TXT functionality to perform attestation of ESXi hypervisors (this basically means that HyTrust can verify the integrity of the hypervisor as a trusted platform). They also recently introduced "two man" support; that is, support for actions to be approved or denied by a second party. For example, an administrator might try to delete a VM, but that deletion would need to be approved by a second party before it is allowed to proceed. HyTrust also continues to explore other integration points with related technologies, such as OpenStack, NSX, physical networking gear, and converged infrastructure. Be sure to keep an eye on HyTrust—I think they're going to be doing some pretty cool things in the near future.


Vormetric

Vormetric interested me because they offer a data encryption product, and I was interested to see how—if at all—they integrated with VMware vSphere. It turns out they don't integrate with vSphere at all, as their product is really more tightly integrated at the OS level. For example, their product runs natively as an agent/daemon/service on various UNIX platforms, various Linux distributions, and all recent versions of Windows Server. This gives them very fine-grained control over data access. Given their focus is on "protecting the data," this makes sense. Vormetric also offers a few related products, like a key management solution and a certificate management solution.


SimpliVity

SimpliVity is one of a number of vendors touting "hyperconvergence," which—as far as I can tell—basically means putting storage and compute together on the same node. (If there is a better definition, please let me know.) In that regard, they could be considered similar to Nutanix. I chatted with one of the designers of the SimpliVity OmniCube. SimpliVity uses VM-based storage controllers that leverage VMDirectPath for accelerated access to the underlying hardware, presenting that hardware back to the ESXi nodes as NFS storage. Their file system—developed during the 3 years they spent in stealth mode—abstracts away the hardware so that adding OmniCubes means adding both capacity and I/O (as well as compute). They use inline deduplication not only to reduce capacity consumption, but especially to avoid having to write I/Os to the storage in the first place. (Capacity isn't usually the issue; I/Os are typically the issue.) SimpliVity's file system enables fast backups and fast clones; although they didn't elaborate, I would assume they are using a pointer-based system (perhaps even an optimized content-addressed storage [CAS] model) that keeps them from having to copy large amounts of data around the system. This is what enables them to do global deduplication, backups from any system to any other system, and restores from any system to any other system (system here referring to an OmniCube).

In any case, SimpliVity looks very interesting due to its feature set. It will be interesting to see how they develop and mature.

SanDisk FlashSoft

This was probably one of the more fascinating meetings I had at the conference. SanDisk FlashSoft is a flash-based caching product that supports various OSes, including an in-kernel driver for ESXi. What made this product interesting was that SanDisk brought out one of the key architects behind the solution, who went through their design philosophy and the decisions they’d made in their architecture in great detail. It was a highly entertaining discussion.

More than just entertaining, though, it was really informative. FlashSoft aims to keep their caching layer as full of dirty data as possible, rather than seeking to flush dirty data right away. The advantage this offers is that if another change to that data comes, FlashSoft can discard the earlier change and only keep the latest change—thus eliminating I/Os to the back-end disks entirely. Further, by keeping as much data in their caching layer as possible, FlashSoft has a better ability to coalesce I/Os to the back-end, further reducing the I/O load. FlashSoft supports both write-through and write-back models, and leverages a cache coherency/consistency model that allows them to support write-back with VM migration without having to leverage the network (and without having to incur the CPU overhead that comes with copying data across the network). I very much enjoyed learning more about FlashSoft’s product and architecture. It’s just a shame that I don’t have any SSDs in my home lab that would benefit from FlashSoft.


SwiftStack

My last meeting of the week was with a couple of folks from SwiftStack. We sat down to chat about Swift, SwiftStack, and object storage, and discussed how they are seeing the adoption of Swift in lots of areas—not just with OpenStack, either. That seems to be a pretty common misconception (that OpenStack is required to use Swift). SwiftStack is working on some nice enhancements to Swift that hopefully will show up soon, including erasure coding support and greater policy support.

Summary and Wrap-Up

I really appreciate the time that each company took to meet with me and share the details of their particular solution. One key takeaway for me was that there is still lots of room for innovation. Very cool stuff is ahead of us—it’s an exciting time to be in technology!


Welcome to Technology Short Take #29! This is another installment in my irregularly published series of links, thoughts, rants, and raves across various data center-related fields of technology. As always, I hope you find something useful here.


Networking

  • Who out there has played around with Mininet yet? Looks like this is another tool I need to add to my toolbox as I continue to explore networking technologies like OpenFlow, Open vSwitch, and others.
  • William Lam has a recent post on some useful VXLAN commands found in ESXCLI with vSphere 5.1. I’m a CLI fan, so I like this sort of stuff.
  • I still have a lot to learn about OpenFlow and networking, but this article from June of last year (it appears to have been written by Ivan Pepelnjak) discusses some of the potential scalability concerns around early versions of the OpenFlow protocol. In particular, the use of OpenFlow to perform granular per-flow control when there are thousands (or maybe only hundreds) of flows presents a scalability challenge (for now, at least). In my mind, this isn’t an indictment of OpenFlow, but rather an indictment of the way that OpenFlow is being used. I think that’s the point Ivan tried to make as well—it’s the architecture and how OpenFlow is used that makes a difference. (Is that a reasonable summary, Ivan?)
  • Brad Hedlund (who will be my co-worker starting on 2/11) created a great explanation of network virtualization that clearly breaks down the components and explains their purpose and function. Great job, Brad.
  • One of the things I like about Open vSwitch (OVS) is that it is so incredibly versatile. Case in point: here’s a post on using OVS to connect LXC containers running on different hosts via GRE tunnels. Handy!
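For the curious, the heart of the OVS-plus-GRE trick boils down to a few ovs-vsctl commands per host. This is a hedged sketch, not taken verbatim from the linked post; the bridge name, veth port name, and remote IP are placeholders:

```shell
# Create an OVS bridge and attach the container's host-side veth
# interface to it (the veth name depends on your LXC configuration)
ovs-vsctl add-br br0
ovs-vsctl add-port br0 veth-lxc0

# Add a GRE tunnel port pointing at the peer host; repeat on the
# other host with this host's IP as remote_ip
ovs-vsctl add-port br0 gre0 -- set interface gre0 type=gre \
  options:remote_ip=192.0.2.12
```

Once both hosts are configured, the containers share a Layer 2 segment over the tunnel, even though they sit on different physical machines.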


Servers/Hardware

  • Cisco UCS is pretty cool in that it makes automation of compute hardware easier through such abstractions as server profiles. Now, you can also automate UCS with Chef. I traded a few tweets with some Puppet folks, and they indicated they're looking at this as well.
  • Speaking of Puppet and hardware, I also saw a mention on Twitter about a Puppet module that will manage the configuration of a NetApp filer. Does anyone have a URL with more information on that?
  • Continuing the thread on configuration management systems running on non-compute hardware (I suppose this shouldn't be under the "Servers/Hardware" section any longer!), I also found references to running CFEngine on network appliances and running Chef on Arista switches. That's kind of cool. What kind of coolness would result from even greater integration between an SDN controller and a declarative configuration management tool? Hmmm…


Security

  • Want full-disk encryption in Ubuntu, using AES-XTS-PLAIN64? Here's a detailed write-up on how to do it.
  • In posts and talks I’ve given about personal productivity, I’ve spoken about the need to minimize “friction,” that unspoken drag that makes certain tasks or workflows more difficult and harder to adopt. Tal Klein has a great post on how friction comes into play with security as well.
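As a rough sketch of what that cipher spec looks like in practice (the device name is a placeholder, luksFormat destroys any data on the target, and the linked write-up covers the full procedure):

```shell
# Format the target partition with LUKS, using AES in XTS mode with a
# 512-bit key (XTS splits the key in half, so this is AES-256)
cryptsetup luksFormat --cipher aes-xts-plain64 --key-size 512 /dev/sdXn

# Unlock the container and put a filesystem on the mapped device
cryptsetup luksOpen /dev/sdXn secure
mkfs.ext4 /dev/mapper/secure
```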

Cloud Computing/Cloud Management

  • If you, like me, are constantly on the search for more quality information on OpenStack and its components, then you’ll probably find this post on getting Cinder up and running to be helpful. (I did, at least.)
  • Mirantis—recently the recipient of $10 million in funding from various sources—posted a write-up in late November 2012 on troubleshooting some DNS and DHCP service configuration issues in OpenStack Nova. The post is a bit specific to work Mirantis did in integrating an InfoBlox appliance into OpenStack, but might be useful in other situations as well.
  • I found this article on Packstack, a tool used to transform Fedora 17/18, CentOS 6, or RHEL 6 servers into a working OpenStack deployment (Folsom). It seems to me that lots of people understand that getting an OpenStack cloud up and running is a bit more difficult than it should be, and are therefore focusing efforts on making it easier.
  • DevStack is another proof point of the effort going into making it easier to get OpenStack up and running, although the focus for DevStack is on single-host development environments (typically virtual themselves). Here's one write-up on DevStack; here's another one by Cody Bunch, and yet another one by the inimitable Brent Salisbury.

Operating Systems/Applications

  • If you’re interested in learning Puppet, there are a great many resources out there; in fact, I’ve already mentioned many of them in previous posts. I recently came across these Example42 Puppet Tutorials. I haven’t had the chance to review them myself yet, but it looks like they might be a useful resource as well.
  • Speaking of Puppet, the puppet-lint tool is very handy for ensuring that your Puppet manifest syntax is correct and follows the style guidelines. The tool has recently been updated to help fix issues as well. Read here for more information.


Storage

  • Greg Schulz (aka StorageIO) has a couple of VMware storage tips posts you might find useful reading. Part 1 is here, part 2 is here. Enjoy!
  • Amar Kapadia suggests that adding LTFS to Swift might create an offering that could give AWS Glacier a real run for the money.
  • Gluster interests me. Perhaps it shouldn’t, but it does. For example, the idea of hosting VMs on Gluster (similar to the setup described here) seems quite interesting, and the work being done to integrate KVM/QEMU with Gluster also looks promising. If I can ever get my home lab into the right shape, I’m going to do some testing with this. Anyone done anything with Gluster?
  • Erik Smith has a very informative write-up on why FIP snooping is important when using FCoE.
  • Via this post on ten useful OpenStack Swift features, I found this page on how to build the “Swift All in One,” a useful VM for learning all about Swift.


Virtualization

  • There's no GUI for it, but it's kind of cool that you can indeed create VM anti-affinity rules in Hyper-V using PowerShell. This is another example of how Hyper-V continues to get more competent. Ignore Microsoft and Hyper-V at your own risk…
  • Frank Denneman takes a quick look at using user-defined NetIOC network resource pools to isolate and protect IP-based storage traffic from within the guest (i.e., using NFS or iSCSI from within the guest OS, not through the VMkernel). Naturally, this technique could be used to “protect” or “enhance” other types of important traffic flows to/from your guest OS instances as well.
  • Andre Leibovici has a brief write-up on the PowerShell module for the Nicira Network Virtualization Platform (NVP). Interesting stuff…
  • This write-up by Falko Timme on using BoxGrinder to create virtual appliances for KVM was interesting. I might have to take a look at BoxGrinder and see what it’s all about.
  • In case you hadn’t heard, OVF 2.0 has been announced/released by the DMTF. Winston Bumpus of VMware’s Office of the CTO has more information in this post. I also found the OVF 2.0 frequently asked questions (FAQs) to be helpful. Of course, the real question is how long it will be before vendors add support for OVF 2.0, and how extensive that support actually is.

And that’s it for this time around! Feel free to share your thoughts, suggestions, clarifications, or corrections in the comments below. I encourage your feedback, and thanks for reading.


A short while ago, I talked about how to add client-side encryption to Dropbox using EncFS. In that post, I suggested using BoxCryptor to access your encrypted files. A short time later, though, I uncovered a potential issue with (what I thought to be) BoxCryptor. I have an update on that issue.

In case you haven’t read the comments to the original BoxCryptor-Markdown article, it turns out that the problem with using Markdown files with BoxCryptor doesn’t lie with BoxCryptor—it lies with Byword, the Markdown editor I was using on iOS. Robert, founder of BoxCryptor, suggested that Byword doesn’t properly register the necessary handlers for Markdown files, and that’s why BoxCryptor can’t preview the files or use “Open In…” functionality. On his suggestion, I tried Textastic.

It works flawlessly. I can preview Markdown files in the iOS BoxCryptor client, then use “Open In…” to send the Markdown files to Textastic for editing. I can even create new Markdown files in Textastic and then send them to BoxCryptor for encrypted upload to Dropbox (where I can, quite naturally, open them using my EncFS filesystem on my Mac systems). Very nice!

If you are thinking about using EncFS with Dropbox and using BoxCryptor to access those files from iOS, and those files are text-based files (like Markdown, plain text, HTML, and similar file formats), I highly recommend Textastic.


About a week ago, I published an article showing you how to use EncFS and BoxCryptor to provide client-side encryption of Dropbox data. After working with this configuration for a while, I’ve run across a problem (at least, a problem for me—it might not be a problem for you). The problem lies on the iPad end of things.

If you haven’t read the earlier post, the basic gist of the idea is to use EncFS—an open source encrypting file system—and OSXFUSE to provide file-level encryption of Dropbox data on your OS X system. This is client-side encryption where you are in the control of the encryption keys. To access these encrypted files from your iPad, you’ll use the BoxCryptor iOS client, which is compatible with EncFS and decrypts the files.

Sounds great, right? Well, it is…mostly. The problem arises from the way that the iPad handles files. BoxCryptor uses the built-in document preview functionality of iOS, which in turn allows you to access the iPad's "Open In…" functionality. The only way to get to the "Open In…" menu is to first preview the document using the iOS document preview feature. Unfortunately, the iOS document preview functionality doesn't recognize a number of files and file types. Most notably for me, it doesn't recognize Markdown files (I've tried several different file extensions and none of them seem to work). Since the preview feature doesn't recognize Markdown, I can't get to "Open In…" to open the documents in Byword (an iOS Markdown editor), and so I'm essentially unable to access my content.

To see if this was an iOS-wide problem or a problem limited to BoxCryptor, I tested accessing some non-encrypted files using the Dropbox iOS client. The Dropbox client will, at least, render Markdown and OPML files as plain text. The Dropbox iOS client still does not, unfortunately, know how to get the Markdown files into Byword. I even tried a MindManager mind map; the Dropbox client couldn't preview it (not surprisingly), but it did give me the option to open it in the iOS version of MindManager. The BoxCryptor client also worked with a mind map, but refused to work with plain text-based files like Markdown and OPML.

Given that I create the vast majority of my content in Markdown, this is a problem. If anyone has any suggestions, I’d love to hear them in the comments. Otherwise, I’ll post more here as soon as I learn more or find a workaround.


Lots of folks like using Dropbox, the ubiquitous store-and-sync cloud storage service; I am among them. However, concerns over the privacy and security of my data have kept me from using Dropbox for some projects. To help address that, I looked around to find an open, interoperable way of adding an extra layer of encryption onto my data. What I found is described in this post, and it involves using the open source EncFS and OSXFUSE projects along with an application from BoxCryptor to provide real-time, client-side AES-256 encryption.


First, some background on why I went down this path. Of all the various cloud-based services out there, I'm not sure there is a service that I rely upon more than Dropbox. The Dropbox team has done a great job of creating an almost seamlessly integrated product that makes it much easier to keep your files accessible across locations and devices.

Of course, Dropbox is not without its flaws, and security and privacy are considered among the prime concerns. Dropbox states they use server-side encryption to protect your data on the Amazon S3 infrastructure, but Dropbox also controls those server-side encryption keys. Many individuals, myself among them, would prefer client-side encryption with control over our own encryption keys.

So, a fair number of companies have sprung up offering ways to help fix this. One of these is BoxCryptor, who offers an application for Windows, Mac, iOS, and Android that performs client-side encryption. From the Mac OS X perspective, BoxCryptor’s solution is, as far as I know, built on top of some fundamental building blocks:

  • The open source OSXFUSE project, which is a port of FUSE for Mac OS X
  • A Mac port of the open source EncFS FUSE filesystem

I would imagine that ports of these components for other operating systems are used in their other platforms, but I don’t know this for certain. Regardless, it’s possible to use BoxCryptor’s application to get client-side encryption across a variety of platforms. For those who want a quick, easy, simple solution, my recommendation is to use BoxCryptor. However, if you want a bit more flexibility, then using the individual components can give you the same effect. I chose to use the individual components, more for my own understanding than anything else, and that’s what is described in this post.

What You’ll Need

This post was written from the perspective of getting this solution running on Mac OS X; if you’re using a different operating system, the specifics will quite naturally be different (although the broad concepts are still applicable).

There are four main components you’ll need:

  • OSXFUSE: This is a port of FUSE to OS X, and is one of a couple of successors to the now-defunct MacFUSE project. OSXFUSE is available to download here.
  • Macfusion: Macfusion is a GUI to help simplify and automate the mounting of filesystems. While it’s not strictly necessary, it does make things a lot easier. Macfusion can be downloaded here.
  • EncFS: You’ll need a version of EncFS for Mac OS X. There are a variety of ways to get it; I used an installer actually made available by BoxCryptor here.
  • EncFS plugin for Macfusion: This is what enables Macfusion to mount or unmount EncFS filesystems, and is actually included in the EncFS installer above. You can also download the plugin here.

Setting Things Up

Once you have all the components you need, then you’re ready to start installing.

  1. First, install OSXFUSE. When installing OSXFUSE, be sure to select the option to install the MacFUSE Compatibility Layer. The OSXFUSE installer recommends rebooting after the installation, but I waited until I'd finished installing all the components.

  2. Once OSXFUSE is installed, install Macfusion. Macfusion is distributed as a ZIP file; simply unzip the file and move it to the location of your choice. I installed it to /Applications.

  3. Next, run the EncFS installer. During the installation, select to install only EncFS and the EncFS plugin for Macfusion. Do not install any of the other components. I rebooted here.

  4. You’ll need both a mount point as well as a directory to store the raw, encrypted data. Since the raw, encrypted data is intended to be synchronized via Dropbox, you’ll want to create the encrypted directory in the Dropbox hierarchy. I chose to use ~/Dropbox/Secure. For the mount point, I chose to use ~/.Secure. You can obviously modify both of these directories to better suit your own needs or preferences.

  5. Once you have all the components installed and the mount point and encrypted directories created, you’re ready to actually create the encrypted filesystem. Run the command encfs ~/Dropbox/Secure ~/.Secure. The encfs program will run through some questions; select “x” for Expert mode and configure it according to the guidelines described in this support article. When prompted for a passphrase, be sure to enter an appropriately complex passphrase—and make sure you remember it (you’ll need it later).

  6. When encfs finishes running, it will mount an encrypted volume on your desktop. It will have an odd name, but you won’t be able to change it. Go ahead and eject (unmount) this volume; we’ll remount it again shortly using Macfusion. Note that you might see some Dropbox activity here.

  7. Launch Macfusion, then re-add the encrypted filesystem created in step 5; you’ll need to supply the same passphrase you entered earlier. Here in Macfusion you’ll be able to specify a name for the encrypted filesystem and supply a custom icon as well. Mount the encrypted filesystem to be sure that everything is working as expected.

That’s it—any files you now copy into the encrypted filesystem—which is represented by an external drive on your Desktop—will be encrypted using AES-256 and then synchronized to Dropbox. Cool, huh?
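Condensed into commands, steps 4 and 5 look something like the following (the paths are the ones I used; encfs prompts interactively for the configuration mode and passphrase, so this isn't scriptable as-is):

```shell
# Step 4: create the encrypted directory (inside Dropbox, so the raw
# encrypted files get synchronized) and the local mount point
mkdir -p ~/Dropbox/Secure ~/.Secure

# Step 5: create and mount the EncFS filesystem; answer "x" for
# Expert mode and configure per the BoxCryptor-compatible guidelines
encfs ~/Dropbox/Secure ~/.Secure

# Step 6: unmount it (we'll remount it via Macfusion shortly)
umount ~/.Secure
```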

Adding Another Computer

I have two Macs in my office (my 13″ MacBook Pro and my Mac Pro), so I had to repeat the process on the second Mac so that it could read the encrypted files. If you have more than one computer, you'll need to do the same. Simply go through steps 1 through 5; in step 5, though, encfs will only prompt for the passphrase, since the filesystem configuration already exists in the synchronized directory. (You can even skip steps 5 and 6 and go straight to step 7, supplying the passphrase in Macfusion.) As long as you have the passphrase for the encrypted filesystem, adding access for additional Dropbox-linked computers should be a piece of cake.

Adding Access from iOS

This is where BoxCryptor comes back into play again. Install the BoxCryptor app onto your device, then link it to your Dropbox account and select the directory within Dropbox where the raw, encrypted data is found. As long as you followed the configuration guidelines here, BoxCryptor should be able to decrypt the encrypted filesystem created with EncFS.

Following these instructions, you’ll gain a way to add AES-256 encryption to your Dropbox files (or a subset of your Dropbox files) while still maintaining access to those files from just about any location across a variety of devices.

If anyone has any questions or clarifications about what I’ve posted here, please speak up in the comments below. All courteous comments are welcome!


EMC Data Domain made a couple of significant product announcements today. First, EMC announced the EMC Data Domain Global Deduplication Array (GDA). Second, EMC also announced a doubling of the logical capacity for the high-end Data Domain DD880.

The full press release for the GDA announcement is available here. The “speeds and feeds” of the new multi-controller architecture of the GDA include throughput of up to 12.8 TB/hour, up to 270 concurrent backup operations, and up to 14.2 PB (yes, petabytes) of logical storage capacity. That’s pretty impressive, if you ask me.

As for the doubling of capacity on the DD880, the full press release has all the details. In addition to greater capacity on the DD880 (up to 7.1 PB of logical storage capacity), the announcement also unveiled Data Domain encryption. This represents the industry's first encryption of data at rest on deduplicated storage. Encryption occurs inline using administrator-selected 128-bit or 256-bit Advanced Encryption Standard (AES) algorithms.

Take a look at the full press releases for both these announcements for more information.


Mac FTP/SFTP Clients

I’d gotten turned on to Cyberduck as my primary FTP/SFTP client after really getting into Growl, the global notification system for Mac OS X.  The application I was using at the time, Fugu, didn’t have Growl support.  Cyberduck did, so I switched, and I’ve been using Cyberduck ever since.

I like the Cyberduck interface; it seems to make sense to me and I’ve never really run into any major compatibility issues (seems like I ran into one minor problem after an upgrade of OpenSSH on one of my servers, but that problem was quickly resolved as I recall).  The Growl support is, of course, excellent, and Cyberduck also offers a veritable laundry list of features—integrated support for Spotlight, a Dashboard widget, Keychain support, multiple windows, etc.  It even comes as a Universal binary.  (The features are far too many to list here; refer to the Cyberduck web site for complete information.)

Sound like a great application?  It is—if you don’t need to transfer large files.  Since I started out just using Cyberduck to move some small web pages back and forth to my web server, these were mostly small files and I didn’t really notice any performance hit.  Sure, it seemed a bit slower than command-line SFTP or SCP and it seemed to be a bit of a memory hog, but I figured it was just GUI overhead and thought no more about it.  For what I was doing at the time, it worked fine.

Recently, though, I’ve been needing to transfer much larger files to and from some SFTP servers on my local LAN.  How large?  ISO images ranging from 300MB to 600MB, sometimes multiple ISO images at a time.  Generally, the file transfers will complete, but they are just plain slow.  Almost painfully slow.  So slow, in fact, that I’ve been driven to looking at alternatives.

I’m currently evaluating Interarchy.  While the interface is a bit quirky (although I suppose that is due to being predisposed to an interface like Cyberduck’s), the performance is astounding.  I can transfer multiple ISO images in minutes, not hours as with Cyberduck.  It’s almost unbelievable.

I have yet to decide whether I’ll just buy Interarchy or if I’ll evaluate two other potential candidates, Transmit and Fetch.  Both applications have gotten good reviews, but—being the UI stickler that I am—neither of them sports as modern a UI as Interarchy (I really like the unified toolbar look).

My primary complaint with Interarchy is the price.  Sixty bucks seems a bit high for this type of application; both Transmit and Fetch (other options to replace Cyberduck) charge about half that.  Of course, the other applications don’t offer the same set of functionality that Interarchy offers, either.  But will I actually use that functionality?  Amazon S3 support is great, but will I really use Amazon S3?  I don’t have a WebDAV server, so is it worth paying for WebDAV support?  Is it worth paying for network tools that duplicate functionality already in the base operating system?

What do you think?  If you are a Fetch, Transmit, Interarchy, Fugu, or even Cyberduck user, please post in the comments and tell me what you think.


The OpenSSL toolkit is a veritable Swiss Army knife of SSL functionality. Among the many, many things that can be done using OpenSSL is converting SSL certificates between formats. This is particularly helpful in a heterogeneous environment where different platforms may require SSL certificates to be in different formats.

A mixed Windows-Linux shop is one excellent example. Windows typically requires certificates in PFX format; Linux, on the other hand, typically needs PEM format. (See this X.509 article for more information on the PFX and PEM formats.) Using the OpenSSL toolkit, we can pretty easily convert certificates from PFX to PEM. Here’s how.

Before we begin, we’ll need to make sure we have the certificate in PFX format with the private key. In organizations that use the Windows Certificate Services as a CA, we can use the Certificates MMC snap-in to export the certificate and the corresponding private key to a PFX file. During this process, we’ll be prompted for a passphrase; make note of it, as we’ll need it later in the process.

With our PFX file in hand, we start the conversion process:

  1. At a command-line prompt, type openssl pkcs12 -in pfxfilename.pfx -out tempfile.pem. This will convert the PFX file to a PEM file. The OpenSSL toolkit will prompt for the import passphrase; this will be the passphrase for the PFX file when the certificate and private key were exported (as mentioned above). OpenSSL will prompt for a new PEM passphrase; be sure to make note of this information as well.
  2. Using a text editor, split the PEM file into two separate files, one containing the certificate and one containing the encrypted private key. Remove all extra text from these files outside the lines with the dashes.
  3. Because many Linux-based applications will need the private key decrypted (or they will prompt for the passphrase during service start), we’ll decrypt the private key. To decrypt the private key, use the command openssl rsa -in encryptedkey -out decryptedkey (where encryptedkey is the file containing the RSA private key, as separated above, and decryptedkey is the file that will contain the decrypted RSA private key). The OpenSSL toolkit will prompt for the RSA key passphrase; this will be the PEM passphrase we specified when we first converted the certificate to PEM format above.
  4. If the application can use the certificate and the key in separate files, then we’re finished. If we need to put them back into the same file, then use the command cat decryptedkey certificatefile > finalfile.pem (on Mac OS X or Linux) or the command copy /b decryptedkey+certificatefile finalfile.pem (on Windows). This will combine the certificate and the decrypted private key into a single file. Using a text editor, add a blank line between the decrypted RSA private key and the certificate, and a blank line after the end of the certificate.

The final file is now ready for use with any number of Linux-based applications, such as Stunnel, Apache, Postfix, or others.
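The four steps above can also be sketched as a script. Filenames and passphrases here are hypothetical, a throwaway self-signed certificate stands in for the one exported from Windows, passphrases are supplied via -passin/-passout instead of interactive prompts, and the -nokeys/-nocerts flags perform the split that step 2 does by hand in a text editor:

```shell
# Stand-in for the PFX exported from Windows Certificate Services:
# generate a throwaway self-signed certificate and bundle it as PFX.
# All filenames and passphrases below are hypothetical examples.
openssl req -x509 -newkey rsa:2048 -days 1 -nodes \
    -keyout orig-key.pem -out orig-cert.pem -subj "/CN=example.test" 2>/dev/null
openssl pkcs12 -export -in orig-cert.pem -inkey orig-key.pem \
    -out certificate.pfx -passout pass:exportpw

# Steps 1-2: -nokeys/-nocerts split the PFX programmatically instead of
# splitting the PEM file in a text editor. (Leading "Bag Attributes"
# text may remain; it can be stripped as described in step 2.)
openssl pkcs12 -in certificate.pfx -nokeys -out cert.pem -passin pass:exportpw
openssl pkcs12 -in certificate.pfx -nocerts -out enc-key.pem \
    -passin pass:exportpw -passout pass:pempw

# Step 3: decrypt the private key for applications that can't prompt.
openssl rsa -in enc-key.pem -out key.pem -passin pass:pempw

# Step 4: recombine key and certificate into a single file.
cat key.pem cert.pem > final.pem
```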

UPDATE: It turns out this is a duplicate post, originally covered earlier here. Sorry!


Quite a long time ago, I posted two short articles on transparent RDP tunneling (read more here and here).  To be honest, I had forgotten that I hadn’t posted more complete details on how exactly I went about making it work.  So, to rectify that problem, here are the full details.

First, some background.  I have a number of customers whose servers I manage remotely via Remote Desktop.  Remote Desktop (or Terminal Services, if running in Application Server mode), as you may be aware, uses Microsoft’s RDP as the protocol.  Not comfortable using RDP across the Internet, I always encrypted RDP using SSL (typically via Stunnel), but this proved unwieldy as the number of servers increased.  I needed a way that I could use any ordinary RDP client from within my office and transparently tunnel that RDP traffic inside SSL.

<aside>The reason this became unwieldy was the number of client-side definitions I had to create on my Mac OS X laptop using SSL Enabler.  After a while, it became difficult to remember which local port corresponded to each remote server.</aside>

So, using OpenBSD (then version 3.7, now version 3.8), I first added some additional IP addresses to the le1 interface by modifying the /etc/hostname.le1 file like so:
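(The actual file contents were scrubbed; on OpenBSD, the alias lines in /etc/hostname.le1 would look something like this, with hypothetical RFC 1918 addresses standing in:)

```
inet 192.168.1.10 255.255.255.0 NONE
inet alias 192.168.1.11 255.255.255.255
inet alias 192.168.1.12 255.255.255.255
```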


Using ping, I verified that the new IP addresses were responding, then proceeded to configure Stunnel to accept unencrypted connections and forward them to another host as encrypted connections.  The Stunnel configuration looked something like this:

client = yes
accept  =
connect =

I also had to add an “ms-wbt-server” entry to the /etc/services file with the appropriate port number (3389).
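Filled in with hypothetical addressing (the real values were scrubbed above), the client-side Stunnel configuration would look something like this, with one accept/connect pair per remote server:

```
; Client side (OpenBSD): accept plain RDP on a local alias IP and
; forward it as SSL to the remote Stunnel instance on port 54321.
; Addresses are hypothetical stand-ins for the scrubbed values.
client  = yes
accept  = 192.168.1.11:ms-wbt-server
connect = remote-server.example.com:54321
```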

On the other end of the tunnel, Stunnel was set up in reverse—it was configured to receive an encrypted connection on port 54321 (for example) and forward that as an unencrypted connection to the standard RDP port (3389).  The Stunnel configuration looked something like this:

CApath = c:\winnt\system32\stunnel
cert = c:\winnt\system32\stunnel\stunnel.pem
client = no
service = SSLTunnel
accept = 54321
connect = 3389

Again, an “ms-wbt-server-s” entry (the “-s” for “secure”) had to be added to the services file (on Windows boxes, typically located in “C:\winnt\system32\drivers\etc”).  Then I registered Stunnel to run as a service (I believe the command line was “stunnel -s <config file name>” or similar).  Upon starting the service, I verified that we now had a listening port using “netstat -an”.

When all looked good, I configured any firewalls to pass the appropriate traffic and tested the connection.  Done!  I was now able to connect to one of the internal IP addresses on the OpenBSD server using a standard, unencrypted RDP connection.  That connection was then encrypted in SSL and forwarded across the Internet to a waiting Stunnel instance, where it was decrypted and handed off to the RDP listener.

With a few modifications, this approach could easily be used to create application-specific VPNs between multiple locations within the same organization, or between different organizations.


One of my projects involved the configuration of GRE (Generic Routing Encapsulation) tunnels, encrypted by IPSec, between two locations. I was having some problems getting the tunnels to work properly, but now I’ve managed to resolve that problem, and the configuration is working well. Here’s some additional information on the problem and how it was finally corrected.

This was my first project using GRE tunnels. I’d used IPSec tunnels many times, and on many different platforms, but this time around we needed an interface that could be tracked for HSRP (Hot Standby Router Protocol) purposes, and until recently Cisco didn’t offer IPSec tunnel interfaces. (I just came across some documentation last night that indicated very recent releases of IOS offer this functionality.) So, the idea was to use GRE tunnels, track the GRE tunnels using HSRP for failover with another router, and encrypt the traffic using IPSec in transport mode.

The GRE tunnel configuration (scrubbed for sensitive data) looked something like this originally:

interface Tunnel0
 description GRE tunnel to other location
 ip address
 tunnel source FastEthernet0/0
 tunnel destination
 crypto map tunnel-ipsec-map

Of course, there was an appropriately configured interface at the other end of the tunnel as well. The tunnels came up, and appeared to work just fine, until we added the keepalive statement. (The keepalive statement is required for the tunnel to report an actual up/down status, necessary for HSRP interface tracking.) Then they went down and stayed down.

A “debug tunnel” statement showed that the keepalives were being sent, but none were being received. Thinking perhaps the IPSec configuration was incorrect, I removed the “crypto map” statement from the tunnel interface. It still didn’t work.

After reviewing the configuration again, I began to suspect an MTU issue—the “show int tun0” output listed an MTU of 1514. I consulted with a Cisco expert (who had recently obtained his CCIE), and he confirmed that it was most likely an MTU issue. So I modified the configuration to look like this:

interface Tunnel0
 description GRE tunnel to other location
 ip address
 ip mtu 1400
 tunnel source FastEthernet0/0
 tunnel destination

At that point, the tunnel finally came up and I was able to pass traffic through the tunnel. I re-added the “crypto map” statement to enforce encryption, and the tunnels promptly went back down again.

Once again, debug output saved the day. The output from a “debug crypto” statement was constantly reporting “packet too small”. A search of the Cisco web site turned up a result (I can’t find it now) that indicated a bug within IOS and suggested the addition of a “tunnel key” statement. So, I modified the configuration again:

interface Tunnel0
 description GRE tunnel to other location
 ip address
 ip mtu 1400
 tunnel source FastEthernet0/0
 tunnel destination
 tunnel key 12345
 crypto map tunnel-ipsec-map

With this configuration, the IPSec/ISAKMP SAs were established and the tunnels came up, passing traffic as expected. The debug output showed no crypto errors, and keepalives were being sent and received. Success!
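For reference, here is a sketch of what the full working configuration on one side might have looked like, with hypothetical RFC 5737 addressing, placeholder names, and a placeholder pre-shared key filled in for the scrubbed values (the peer would mirror this with source and destination reversed):

```
! Hypothetical addressing and names; the real values were scrubbed.
crypto isakmp policy 10
 authentication pre-share
crypto isakmp key PLACEHOLDER-KEY address 203.0.113.2
!
! Transport mode suffices because GRE already provides the outer tunnel.
crypto ipsec transform-set GRE-SET esp-aes esp-sha-hmac
 mode transport
!
ip access-list extended GRE-TRAFFIC
 permit gre host 198.51.100.2 host 203.0.113.2
!
crypto map tunnel-ipsec-map 10 ipsec-isakmp
 set peer 203.0.113.2
 set transform-set GRE-SET
 match address GRE-TRAFFIC
!
interface Tunnel0
 description GRE tunnel to other location
 ip address 10.255.0.1 255.255.255.252
 ip mtu 1400
 keepalive 10 3
 tunnel source FastEthernet0/0
 tunnel destination 203.0.113.2
 tunnel key 12345
 crypto map tunnel-ipsec-map
```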

