Storage Time Bomb?

The “ticking storage time bomb,” as described in this article, is the fact that all virtual servers share the same WWN (World Wide Name) on the Fibre Channel HBA.

Quoting from the article:

In a virtual server mode, all of the server instances can see and access the same HBA – and all the same logical unit numbers (LUN) attached to it.

Fair enough, at first glance.  You might say, “Wow!  He’s right—all those servers share the same HBA going out to my Fibre Channel SAN, and I’m doing all my zoning and LUN presentation based on HBA.  I’m in trouble!”

But look a little deeper and it appears that there is a fundamental flaw in this argument:  The virtual servers don’t have access to the HBA.  In fact, they don’t even know the HBA exists.  They don’t know the SAN exists.  Even if they knew the HBA and the SAN existed, they still wouldn’t have direct access to that hardware—they have to go through the virtualization layer.

What driver does a Windows guest running on VMware use to access its hard disk?  An LSI Logic or BusLogic SCSI controller driver. Not an Emulex HBA driver, or a QLogic HBA driver, but a standard, ordinary SCSI controller driver.  So how exactly will this Windows guest gain access to LUNs on a SAN that it doesn’t know exists and for which it has no drivers?  Or am I just missing something here?
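You can see this guest-side view in the VM’s configuration file.  Here’s a minimal sketch of the relevant .vmx entries (the file and device names are hypothetical, and the exact keys can vary by ESX version)—note that the guest is wired to a virtual SCSI controller, with no mention of any Fibre Channel hardware:

```
# Hypothetical excerpt from a guest's .vmx file.  The guest sees a
# virtual LSI Logic SCSI controller, not the physical FC HBA.
scsi0.present = "TRUE"
scsi0.virtualDev = "lsilogic"
scsi0:0.present = "TRUE"
scsi0:0.fileName = "winguest.vmdk"
```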

Unless I’m way off (which is certainly very possible), this article is completely misguided.  Yes, N-Port Virtualization is a good thing, but not necessarily for security.  Yes, zoning and LUN masking are important components of a Fibre Channel SAN, but those concepts don’t apply to virtual servers that don’t know the SAN exists, don’t have any drivers for SAN hardware, don’t have any SAN hardware, and couldn’t access the SAN directly if they wanted to.  You have to remember that VMware (and presumably Xen and others, although I don’t know that for certain) hides these details from the guest operating systems.

Am I missing some crucial detail?

UPDATE:  Some of you may have already considered this fact, but add this to the equation:  VMware Consolidated Backup uses a backup proxy server, running Windows Server 2003 or later, that must have access to the same SAN LUNs as the ESX Server hosts.  In this instance, I would consider this to be a potential security problem.  Make sure you properly secure and harden the backup proxy!
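One commonly recommended precaution for the VCB proxy (a hardening suggestion on my part, not something from the article) is to disable Windows’ automatic volume mounting before zoning the proxy into the SAN, so it doesn’t write disk signatures to the VMFS LUNs it can see:

```
C:\> diskpart
DISKPART> automount disable
DISKPART> automount scrub
DISKPART> exit
```

The `automount scrub` step clears any mount-point information Windows may have already recorded for previously seen volumes.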



  1. J.Cruz

    The only thing I can think of is that they are specifically referring to Raw Device Mappings in VMware. But then, you’re bypassing the virtualization layer, right? Which means you are consciously choosing to break out of that layer and grant raw access to the LUN?

    I suppose if a host in your HA cluster (which would necessarily see the same LUNs as the RDM VM) were compromised, is it possible that a VM could be brought online with access to those LUNs?

  2. J.Cruz

    Hey, Scott. The only thing I can think of is that they are referring to Raw Device Mappings, which bypass the virtualization layer. Is it possible for another host in your ESX farm that shares the same LUN mappings as the one hosting your VM-with-RDM to be compromised, and for another VM-with-RDM to be brought online with those same RDMs? Then, bam, you’ve got access to SAN LUNs you shouldn’t have.

  3. slowe

    I suppose that is a possibility, but even then you have a VMDK involved, and that VMDK is associated with a LUN ID (the VMDK in this case just stores metadata, not real data). If that is the case, you don’t even need a DRS/HA cluster to be vulnerable; remember that simple VMotion requires access to the back-end SAN LUN as well.

    I’ll have to dig into RDMs a bit more to see if you could be right. Any RDM experts want to chime in here?

Comments are now closed.