Another Round with iSCSI and ESX Server 3

There were two driving factors that led me to rebuild the iSCSI-based storage serving the test lab, instead of continuing to use the Data ONTAP Simulator.  First, we had acquired a Gigabit Ethernet backbone for the test lab, and I wasn’t convinced that the Data ONTAP Simulator was taking full advantage of the Gigabit Ethernet NICs I installed in the server.  I also wanted to test bonding some NICs together for more throughput, or possibly to try some multipathing.  I couldn’t do either of those with the Data ONTAP Simulator.  Note that this is not a knock against the Data ONTAP Simulator; it’s not designed or intended for those kinds of things.  (It would be great if I could get a real NetApp device in the lab, but I don’t know if that will ever happen.)

After doing some brief research, I settled on using CentOS 4.3 and the iSCSI Enterprise Target (IET).  The installation of CentOS was straightforward and simple, and the installation of IET was equally simple, thanks in part to some fairly detailed instructions for installation on RHEL 4 (which are equally applicable to CentOS 4).  Based on my experience so far, I heartily recommend using the source RPMs instead of building from a source tarball.  It made the process easy and (almost) painless.

I set up a 10 GB logical volume using LVM2 and configured IET to present it via iSCSI by editing /etc/ietd.conf to show this:

    Target iqn.2006-06.example.lab:storage.lvol0
        IncomingUser isanuser secretpw
        Lun 0 Path=/dev/VolGroup00/lvol0,Type=fileio

(Obviously, you’d need to adjust this as appropriate for your own installation.)
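For completeness, the backing LVM2 volume was created on the CentOS box with commands along these lines (the volume group name VolGroup00 matches my installation; yours will almost certainly differ):

    # Carve a 10 GB logical volume out of the existing volume group
    lvcreate -L 10G -n lvol0 VolGroup00

    # Restart the iSCSI Enterprise Target daemon so it re-reads /etc/ietd.conf
    service iscsi-target restart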

Having already learned my lesson regarding the ESX firewall, I ensured that the software iSCSI initiator traffic was allowed outbound before continuing (refer here for more details).  Using the Virtual Infrastructure client, I reconfigured the ESX 3 server to see the new iSCSI server, and the new LUN popped up immediately upon a rescan.  From there, it was a simple operation to establish a new VMFS datastore on the iSCSI LUN and move a VM to the LUN.  That was easy!
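For reference, enabling the software iSCSI client through the ESX Server 3 firewall is done from the service console; if I recall correctly, the commands look like this:

    # Allow outbound traffic for the software iSCSI initiator
    esxcfg-firewall -e swISCSIClient

    # Query the service to confirm it is now enabled
    esxcfg-firewall -q swISCSIClient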

The next steps will be to do some performance tuning, test bonding the NICs and/or multipathing, and perform some NFS interoperability tests.  (Remember that NFS is also supported by ESX 3 for datastores.)



  1. John Troyer’s avatar

    Hi Scott –

    Great post; I linked to it from the VMTN Blog. Good luck with the project, and be sure to check out the VMTN Forums if you run into difficulty.

    Feel free to drop me a line if I can be of any assistance. Let me know if you’re going to VMworld — I’m coordinating blogger activities and resources there.


  2. slowe’s avatar


    Thanks for the link! Yes, I’ll be at VMworld this year (my first time!) and I’m looking forward to it. Hope to see you there.

  3. DR Shaw’s avatar

    Any updates? iSCSI is great for ‘virtual’ disks

  4. slowe’s avatar

    DR Shaw,

    I’m continuing to use the iSCSI Enterprise Target in the test lab, and have performed some performance tuning. Multipathing isn’t really an option; this is due to the way ESX Server 3.0.1 implements the software iSCSI initiator. Using a hardware iSCSI initiator would change that situation, of course. Taking advantage of ESX Server’s NIC bonding, however, seems to work very well.

    My only complaint about IET is that it isn’t quite as stable as I would like; it’s crashed a couple of times and taken down the VMware farm. (Interesting side note: Some VMs are more tolerant of the loss of disk connectivity than others. Windows, for example, doesn’t like it very much.)

  5. Richard’s avatar

    Hi Scott,

    you mention that you use ESX NIC bonding for accessing your iSCSI target. I’m trying to do the same, but can’t get it to work (a single connection works perfectly, but I would like a bit more performance and some fault tolerance).

    Can you elaborate some more on the settings you made on the ESX side to take advantage of NIC bonding?
    I’m using the Open-E iSCSI target (which is based on Linux and uses the Linux Ethernet bonding driver).

    Kind regards,


  6. slowe’s avatar


    At the time, I was simply bonding the NICs within ESX Server, and I didn’t have any problems with that. I didn’t try any bonding on the iSCSI target; instead, I used multiple NICs with multiple IP addresses and manually “load balanced” the inbound traffic. This was using an unmanaged Gigabit Ethernet switch, by the way.

    Since then, I have moved to a managed Cisco Gigabit Ethernet backbone and set up link aggregation (802.3ad) and trunking to the ESX Servers, and I switched to a real Network Appliance storage system for iSCSI storage. Unfortunately, the NetApp has only a single Gigabit Ethernet NIC, so I can’t test any bonding on the storage side. You can read more about the link aggregation and trunking configuration here:
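    In case it helps anyone reading along, the switch side of that setup looks roughly like this on Cisco IOS (interface numbers are placeholders; note that the ESX Server IP-hash teaming policy pairs with a static EtherChannel, i.e. channel-group mode on, since ESX 3 doesn’t negotiate LACP):

        interface Port-channel1
         switchport mode trunk
        !
        interface GigabitEthernet0/1
         channel-group 1 mode on
         switchport mode trunk
        !
        interface GigabitEthernet0/2
         channel-group 1 mode on
         switchport mode trunk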

    Good luck,

  7. Ross Walker’s avatar


    I am interested in hearing how the NetApp appliance compares to the iSCSI Enterprise Target performance-wise. Also, what were some of the stability problems you experienced with IET?

    I have had good luck with IET (outside of some bad LSI drivers initially) and want to know what NetApp provides performance-wise, or is it stability that is the factor for the large price tags?

  8. slowe’s avatar


    I haven’t performed any objective comparisons between IET and the NetApp F8xx series that I’m now using in the lab. Subjectively, however, the NetApp feels noticeably faster than the IET implementation I had previously. That is to be expected since the NetApp uses 14 physical disks to handle the traffic instead of just 4 disks with the IET setup.

    As for stability, I had the IET daemon just crash on a number of occasions, taking down the VMware farm due to the lack of access to the VMDK files. Every time this happened, I had to restart the IET daemon, reboot all the VMs (occasionally having to rebuild VMs), and then continue from there.

    As for the NetApp “large price tag,” there’s more to NetApp than just iSCSI storage–consider Snapshots, FlexClones, SnapMirror, and all the other features that are made possible on NetApp storage. I’ve written about a few of these features:

    I hope to be able to talk more in-depth soon about SnapMirror in VMware environments, as well as discussing the use of NetApp storage to provide NFS-based VMFS datastores.

  9. Ravi’s avatar

    Openfiler works pretty well and is extremely easy to set up as an iSCSI target.
    It’s a free Linux-based distro that you can optionally buy support for.

  10. Aaron’s avatar


    Great articles (the other iSCSI one is excellent); I appreciate you taking the time to share what you have done. I began working on a project creating a SAN for our VMware lab with hopes of hosting some non-critical business systems on a variant of that configuration. However, in the second phase of that project, moving from ESXi 3.5 to ESX 3.5, I cannot get the production VMware host to see anything provided by the iSCSI Enterprise Target. Windows finds the target just fine.

    The iSCSI SAN in production is on a different subnet, so I created an additional vswitch with a service console and vmkernel port. The firewall is configured with open ports for iSCSI.

    VMware support doesn’t cover IET, so I had a hard time getting any troubleshooting tips from them. I have been digging so deep, I am beginning to wonder if the problem is surface level…

    Any ideas?


Comments are now closed.