Republished: Dispelling Some VMware over NFS Myths

Author’s Note: This content was first published over at Storage Monkeys, but it appears that it has since disappeared and is no longer available. For that reason, I’m republishing it here (with minor edits). Where applicable, I’ll also be republishing other old content from that site in the coming weeks. Thanks!

In this article, I’m going to tackle what will probably be a sensitive topic for some readers: VMware over NFS. All across the Internet, I run into article after article after article that sings the praises of NFS for VMware. Consider some of the following examples:

That first link looks to be mostly a reprint of this blog post by Nick Triantos. Now, Nick is a solid storage engineer; there is no question in my mind that he knows Fibre Channel, iSCSI, and NFS inside out. Nick is certainly someone who is more than qualified to speak to the validity of using NFS for VMware storage. But…

I am going to have to disagree with some of the statements that are being propagated about NFS for VMware storage. Is NFS for VMware environments a valid choice? Yes, absolutely. However, there are some myths about NFS for VMware storage that need to be addressed.

  1. Myth #1: All VMDKs are thin provisioned by default with NFS, and that saves significant amounts of storage space. That’s true—to a certain point. What I pointed out back in March of 2008, though, was that these VMDKs are only thin provisioned at the beginning. What does that mean? Perform a Storage VMotion operation to move those VMDKs from one NFS datastore to another, and the VMDK will inflate into a thick provisioned file. Clone another VM from the VM with the thin provisioned disks, and you’ll find that the cloned VM has thick VMDKs. That’s right—the only way to get those thin provisioned VMDKs is to create all your VMs from scratch. Is that what you really want to do? A short sparse-file sketch after this list shows what “thin” means at the file level. (Note: VMware vSphere now supports thin provisioned VMDKs on all storage platforms and corrects the issue of thin provisioned VMDKs inflating due to a Storage VMotion or cloning operation, so this point is somewhat dated.)
  2. Myth #2: NFS uses Ethernet as the transport, so I can just add more network connections to scale the bandwidth. Well, not exactly. Yes, it is possible to add Ethernet links and get more bandwidth. However, you’ll have to deal with a whole list of issues: link aggregation/802.3ad, physical switch redundancy (which is further complicated when you want to use link aggregation/802.3ad), multiple IP addresses on the NFS server(s), multiple VMkernel ports on the VMware ESX servers, and multiple IP subnets. Let’s just say that scaling NFS bandwidth with VMware ESX isn’t as straightforward as it may seem; the second sketch after this list models why a single VMkernel/filer address pair always lands on one uplink. This article I wrote back in July of 2008 may help shed some light on the particulars involved when it comes to ESX and NIC utilization.
  3. Myth #3: Performance over NFS is better than Fibre Channel or iSCSI. Based on this technical report by NetApp—no doubt one of the biggest proponents of NFS for VMware storage—NFS performance trails Fibre Channel, although by less than 10%. So performance is comparable in almost all cases, and the difference is small enough that most workloads won’t notice it. The numbers do not, however, indicate that NFS is better than Fibre Channel. You can read my views on this storage protocol comparison at my site. By the way, also check the comments; you’ll see that the results in the technical report were independently verified by VMware as well. Based on this information, one could certainly say that NFS performance is perfectly reasonable, but one could not say that it is better than Fibre Channel.
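To make Myth #1 concrete: on an NFS datastore a thin VMDK is, at the file level, just a sparse file, so its allocated blocks lag behind its provisioned size. Here’s a minimal Python sketch of that check; the VMDK path is purely illustrative, and st_blocks assumes a POSIX platform.

```python
import os

def report_allocation(path: str) -> None:
    """Compare a file's provisioned (apparent) size to its allocated size."""
    st = os.stat(path)
    provisioned = st.st_size          # apparent size: what the guest sees
    allocated = st.st_blocks * 512    # blocks actually written (POSIX: 512-byte units)
    print(f"{path}: provisioned={provisioned:,} bytes, allocated={allocated:,} bytes")
    print("thin (sparse)" if allocated < provisioned else "thick (fully allocated)")

# Hypothetical path to a flat VMDK on an NFS-mounted datastore
report_allocation("/vmfs/volumes/nfs_ds1/vm1/vm1-flat.vmdk")
```

An operation that rewrites every block of the file, like the Storage VMotion or clone described above, leaves allocated equal to provisioned—which is exactly the thin-to-thick inflation.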

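For Myth #2, the crux is how the IP-hash teaming policy picks an uplink. The sketch below is a simplified model, not VMware’s exact implementation: it assumes the uplink is chosen by hashing the source and destination IP addresses, which is enough to show why one VMkernel/filer address pair always rides a single link.

```python
import ipaddress

def select_uplink(src_ip: str, dst_ip: str, uplink_count: int) -> int:
    """Simplified IP-hash: XOR the 32-bit addresses, mod the team size."""
    src = int(ipaddress.IPv4Address(src_ip))
    dst = int(ipaddress.IPv4Address(dst_ip))
    return (src ^ dst) % uplink_count

# One VMkernel port talking to one filer address: same uplink every time,
# no matter how many NICs are in the team.
print(select_uplink("10.0.1.21", "10.0.1.100", 4))

# Multiple filer aliases (hypothetical addresses) spread the hash across
# uplinks, which is why scaling requires multiple IPs, subnets, and
# VMkernel ports.
for alias in ("10.0.1.100", "10.0.1.101", "10.0.1.102", "10.0.1.103"):
    print(alias, "->", select_uplink("10.0.1.21", alias, 4))
```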
Now, one might look at this article and say, “Scott, you hate NFS!” No, actually, I like using NFS for VMware Infrastructure implementations, and here’s why:

  • Provisioning is a breeze. It’s dead simple to add NFS datastores (see the sketch after this list).
  • You can easily (depending upon the storage platform) increase or decrease the size of NFS datastores. Try decreasing the size of a VMFS datastore and see what happens!
  • You don’t need to deal with the complexity of a Fibre Channel fabric, switches, WWNs, zones, ISLs, and all that. Now, there is some complexity involved (see Myth #2 above), but it’s generally easier than Fibre Channel. Unless you’re a Fibre Channel expert, of course…
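To back up the “provisioning is a breeze” point: mounting an NFS export on a host is a single API call. Here’s a minimal sketch using pyVmomi; it assumes you already have a connected host object in hand, and the filer name, export path, and datastore name are all hypothetical.

```python
from pyVmomi import vim

def add_nfs_datastore(host, remote_host: str, remote_path: str, local_name: str):
    """Mount an NFS export as a datastore on a single ESX(i) host."""
    spec = vim.host.NasVolume.Specification(
        remoteHost=remote_host,   # NFS server / filer address (hypothetical)
        remotePath=remote_path,   # exported path on the filer
        localPath=local_name,     # datastore name as the host will see it
        accessMode="readWrite",
    )
    return host.configManager.datastoreSystem.CreateNasDatastore(spec)

# e.g. add_nfs_datastore(host, "filer1.example.com", "/vol/vmware_ds1", "nfs_ds1")
```

Compare that with zoning a fabric and presenting LUNs before you can even create a VMFS datastore.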

So there are some tangible benefits to using NFS for VMware Infrastructure. But let’s be real about this, and not try to gloss over technical details. While NFS has some real advantages, it also has some real disadvantages, and organizations choosing a storage protocol need to understand both sides of the coin.


  1. Ken Carlile

    I’m curious about your feelings on iSCSI. I’ve already committed to it (in a very small environment), but it would be nice to know if I’ve chosen wrong.

  2. Brian Norris

    Hi Scott, great article. I agree, the NFS vs. FC vs. iSCSI question comes up a lot, but more often it’s NFS vs. iSCSI. I’ll be honest and say I’m biased and tend to lean towards iSCSI, not because I think iSCSI is better but mainly because it’s what I know. Most importantly, if you look back over the last few years, I’m pretty sure VCB, Storage VMotion, and SRM were all supported on iSCSI before NFS. In fact, I must go have a look at the support matrix to see if NFS is supported yet, but I suspect not until the next release of SRM.

  3. slowe

    Brian, you are correct: NFS is not yet supported with SRM. The next major SRM release is slated to include NFS support.

  4. Andrew Miller

    Any word on a timeline for said SRM release?

  5. wer

    Don’t forget that in ESX versions before vSphere, there is no such thing as “thin provisioned” on NFS, contrary to VMware’s documentation. VMware support indicated that this will be fixed in version 4 (vSphere).

    The upside is that if you’re running NFS you should be using NetApp, and can then use dedupe. But that still doesn’t change the fact that creating thousands of VMs and templates in the initial deployment of large ESX clusters over NFS can be a pain without thin provisioning.

  6. slowe

    VMDKs do start out thin, but quickly grow thick. I discussed this in a post on my site some time ago. And, based on my experience, this behavior is corrected in VMware vSphere 4.

  7. Peter

    NetApp also has a very nice utility called RCU (Rapid Clone Utility), which utilises FlexClone on NetApp and keeps clones small. It integrates into VC as well. Details here http://now.netapp.com/NOW/download/software/rapid_cloning/2.0.1/ if you have a NOW login.

  8. Philip Arnason

    With the advent of 10Gb Ethernet, performance CAN be better than Fibre Channel. We are running 10Gb and have all our datastores on NFS, and we’re loving it.

  9. Slav Pidgorny

    There is another reason why I prefer NFS over FC: Fibre Channel analyzers are expensive equipment, while IP and Ethernet analysis tools are readily available on your platform of choice. I have seen situations with both NAS and SAN where performance was suboptimal, and figuring out what’s wrong is much more complicated with an FC SAN. So is replacing parts of the storage infrastructure.

  10. Cliff

    This article states that the “thin-to-thick” issue is fixed in vSphere 4, yet when I download a VM from the datastore to my PC, it still expands. Is there a patch or update I’m missing? I am running ESXi 4.
