Virtualization Short Take #34

Welcome to Virtualization Short Take #34, my occasionally-weekly collection of virtualization-related links, posts, and comments. As usual, this is a hodge-podge of information I’ve gathered from across the Internet over the last few weeks. I hope that you find something useful or helpful here, and thanks for reading!

  • First up is Arne Fokkema’s PowerCLI script to check Windows VM partition alignment. As one commenter pointed out, the fact that the starting offset isn’t 65536—which is what Arne’s script checks—doesn’t necessarily mean that it isn’t aligned. Generally, you can align a Windows partition by setting the starting offset to any number that is evenly divisible by 4096 (4K). If I’m not mistaken, setting the partition offset to 65536 (64K) also ensures that the partition is stripe-aligned on EMC arrays. (A quick WMI-based sketch of the divisible-by-4K check appears after this list.)
  • Here’s a useful reminder to be sure to keep your dependencies in mind when designing VMware vSphere 4 environments. If you design your environment to rely upon DNS—a common situation, since VMware HA is particularly sensitive to name resolution—then be sure to appropriately architect the DNS infrastructure. This “circular dependency” is one reason why I personally tend to keep vCenter Server on a physical system. Otherwise, you have the virtualization management solution running on the infrastructure it is responsible for managing. (Yes, I know that it’s fully supported for it to be virtualized and such.)
  • Forbes Guthrie’s article on incorporating Active Directory authentication and sudo into the kickstart process is a good read. With regard to his note about enabling root SSH access because of an inability to access the Active Directory DCs: I know that in ESX 3.x you could still log in at the Emergency Console when Active Directory connectivity was unavailable; does anyone know if this is still the case with ESX 4.0? I haven’t taken the time to test it yet.
  • Oh, and speaking of Active Directory authentication, Forbes also published this note about Likewise AD authentication supposedly included in ESX 4.1. Looks like someone at Likewise accidentally spilled the beans…
  • I’m sure that everyone has seen the article by Duncan about the ESX 3.x bug that prevents NIC teaming load balancing from working on the global vSwitch configuration, but if you haven’t—well, now you have. Here’s the corresponding KB article, also linked from Duncan’s article. Duncan also recently published a note about an error while installing vCenter Server that is related to permissions; read it here.
  • Are there even better days ahead for virtualization and those involved in it? David Greenfield of Network Computing seems to think so. The comments in the article do seem to bear out my statements that virtualization experts now need to move beyond consolidation and start helping customers tackle the Tier 1, high-end applications. I believe that this is going to require more planning, more expertise, and more knowledge of the applications’ behaviors in order to be successful.
  • Stephen Dion of virtuBLOG brings up a compatibility issue with Intel quad-port Gigabit Ethernet network adapters when used with VMware ESX 4.0 Update 1. Anyone have any updates or additional information on this issue?
  • If you’re considering virtualizing Exchange Server 2010 on VMware vSphere, be sure to read Kenneth’s article here about Exchange 2010 DAGs and VMotion. At least live migration isn’t supported on Hyper-V, either.
  • Want to run a VM inside a VM? This post on nested VMs over at the VMware Communities site has some very useful information.
  • Paul Fazzone (who I believe is a product manager for the Nexus 1000V) highlights a good point-counterpoint article with Bob Plankers and David Davis that discusses the benefits and drawbacks of the Cisco Nexus 1000V. Both writers make excellent points; I guess the real conclusion is that both options offer value for different audiences. Some organizations will prefer the VMware vSwitch (or Distributed vSwitch); others will find value in the Cisco Nexus 1000V. Choice is a beautiful thing.
  • Jason Boche published some performance numbers for the EMC Celerra NS-120 that he’s recently added to his home “lab” (I use the term “lab” rather loosely here, considering the amount of equipment found there). Not surprisingly, Fibre Channel won out over software iSCSI and NFS, but Jason’s numbers showed a larger gap than many expected. I may have to repeat these tests myself in the EMC lab in RTP to see what sorts of results I see. If only I still had the NS-960 from my ePlus days…sigh.
  • Joep Piscaer has a good post on Raw Device Mappings (RDMs) that is definitely worth a read. He’s pulled together a good summary of information on RDMs, such as requirements, limitations, use cases, and frequently asked questions. Good job, Joep!
  • Ivo Beerens has a pretty detailed post on multipathing best practices for VMware vSphere 4 with HP EVA storage. The recommendation is to use Round Robin with ALUA and to reduce the IOPS limit to 1. Ivo also presents a possible workaround to the IOPS “random value” bug that Chad Sakac discussed in this post some time ago. (A rough PowerCLI equivalent of the Round Robin/IOPS settings appears after this list.)
  • Here’s yet another great diagram by Hany Michael, this time on ESX memory management and monitoring.
  • This post tells you how to modify your VMware Fusion configuration files to assign IP addresses for NAT-configured VMs. If you’re familiar with editing dhcpd.conf on a Linux system, the information found here on customizing Fusion should look quite familiar. (A sample host entry is included after this list.)
  • Back in 2007, I wrote a piece on using link state tracking in blade deployments. That post wasn’t necessarily virtualization-focused, but it is certainly quite applicable to virtualization environments. Recently I saw this article pop up on using link state tracking with VMware ESX environments. It’s good to see more people recommending this functionality, which I feel is quite useful.
  • Congratulations to Mike Laverick of RTFM, who this past week announced that TechTarget is acquiring RTFM and its author, much like TechTarget acquired BrianMadden.com (and its author) last year. Is this a new trend for technical blog authors—build up a readership and then “sell it off” to a digital media company?
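
A few quick sketches related to the items above. First, on the partition alignment item: this is a minimal, hypothetical PowerShell/WMI check (not Arne’s script) that simply applies the divisible-by-4096 test to a Windows guest. The guest name is made up, and it assumes you can reach the guest over WMI with appropriate rights.

    # Minimal sketch: list partition starting offsets on a Windows guest and flag
    # anything not evenly divisible by 4096 (4K). "WINVM01" is a hypothetical name.
    Get-WmiObject -Class Win32_DiskPartition -ComputerName "WINVM01" |
        Select-Object SystemName, Name, StartingOffset,
            @{Name="Aligned4K"; Expression={ ($_.StartingOffset % 4096) -eq 0 }}

An offset of 65536 passes this test, but so does any other 4K-aligned value, which is exactly the point the commenter on Arne’s post was making.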
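
Second, on the HP EVA multipathing item: if you prefer to set the pathing policy from PowerCLI rather than from the console, something along these lines should work. This is only a sketch; it assumes a PowerCLI build that exposes the -CommandsToSwitchPath parameter on Set-ScsiLun (that parameter is what controls the IOPS value), the host name is hypothetical, and you should verify the settings against Ivo’s post and HP’s guidance before applying them.

    # Sketch only: set Round Robin with an IOPS limit of 1 on all disk LUNs of one host.
    $esxHost = Get-VMHost -Name "esx01.example.com"   # hypothetical host name
    Get-ScsiLun -VmHost $esxHost -LunType disk |
        Set-ScsiLun -MultipathPolicy RoundRobin -CommandsToSwitchPath 1

Note that this doesn’t touch the ALUA/SATP side of Ivo’s recommendation; that part still has to be handled on the host itself.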
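
Finally, on the Fusion NAT item: the file in question is the ISC-style dhcpd.conf that Fusion generates for its NAT network (vmnet8). A host entry along the lines of the following is what you end up adding; the hostname, MAC address, and IP shown here are made up, and the address has to sit inside Fusion’s NAT subnet but outside the dynamic pool it hands out.

    # Hypothetical static assignment for a NAT-configured Fusion VM. The MAC must
    # match the VM's virtual NIC; the IP must be outside the dynamic range.
    host winxp-test {
        hardware ethernet 00:50:56:3F:12:34;
        fixed-address 192.168.131.50;
    }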

Here are some additional links that I stumbled across but haven’t yet fully assimilated or processed. You might see some more in-depth blog posts about these in the near future as they work their way through my consciousness.

  • Lab Experiment: Hypervisors (Virtualization Review)
  • The Backup Blog: Avamar and VMware Backup Revisited
  • VMware vSphere Capacity IQ Overview – I’m Impressed!

Well, that wraps it up for now. Thanks for reading and feel free to speak out in the comments below.

  1. Ben

    I’m curious why there’s almost nothing about anything Xen-related in your article (this and many others). Do you consider Xen a lost cause?

  2. TimC

    That Celerra benchmarking review is a joke. He’s using 2Gb FC vs. 1Gb NFS/iSCSI going through a $50 Netgear switch, and not even enabling jumbo frames.

    Not to mention, 35,000 IOPS out of 15 drives in a raid-5? If that doesn’t throw up the alarms, I don’t know what will… Let me know where I can get these magical 15k drives that do 2,300 IOPS/piece.

    Cloning a 270GB VM on FC and swiSCSI takes about 16 minutes.
    Cloning a 270GB VM on NFS takes nearly an hour.

    He’s doing something very, very wrong.

  3. Brian Johnson

    Updated: Stephen Dion of virtuBLOG brings up a compatibility issue with Intel quad-port Gigabit Ethernet network adapters when used with VMware ESX 4.0 Update 1.

    Disclaimer: Patrick works in the LAN Access Division at Intel Corp, supporting virtualization technologies.

    The post Patrick made on virtuBLOG —

    It is unfortunate that you had challenges with the driver for the Intel Quad Port GbE adapters. You did not specify which particular Intel Quad Port GbE adapters you are using – are they the 82575 or are they the 82576 adapters?

    I ask because there is a big difference. The Intel 82575 GbE adapters do have in-box support from VMware; however, the Intel 82576 GbE Quad Port controllers do not (the dual port does, but not the quad). VMware decides which drivers they support with in-box releases.

    Intel provides updated Ethernet drivers to VMware, which in turn wraps them in an esxupdate package and posts them to its web site. The latest igb driver from Intel supports the Quad Port 82576 GbE adapters and is available, as you pointed out, here.
    http://downloads.vmware.com/d/details/esx_esxi40_intel_82575_82576_dt/ZHcqYmR0QGpidGR3

    What seems to be missing on this page is actual installation instructions. On previous releases such as the one for ESX 3.5 ( http://www.vmware.com/support/vi3/doc/drivercd/esx35-igb-350.1.3.8.6.3.html ) VMware provided installation instructions for updating the drivers that do not require as many steps as you unfortunately had to go through. The basic steps are:
    1. Mount the ISO
    2. Change dir to /VMupdates/RPMS in the ISO
    3. Run esxupdate update

    The script will run and install the driver, rebooting the server after it has finished. After the reboot has occurred, the Intel® 82576 Gigabit Ethernet Controller should now be listed when you perform the following command:
    # esxcfg-nics -l

    I will see if I can talk to somebody about making sure the installation instructions get added to the download page for the ESX 4.0 Intel igb driver here: http://downloads.vmware.com/d/details/esx_esxi40_intel_82575_82576_dt/ZHcqYmR0QGpidGR3

    Patrick Kutch
    System Manageability & Virtualization TME, Intel Corporation

  4. Tim H

    Hi Scott. I hope you are well.

    Kenneth’s article about DAGs not being supported in a host-clustered or VMotion/HA environment is correct and makes a good point.

    There are also many other support issues around Exchange 2010 in a virtual environment, which are outlined at the bottom of the Exchange 2010 system requirements page at http://technet.microsoft.com/en-us/library/aa996719.aspx under the Hardware Virtualization section.

    Most notable is the 2:1 virtual-to-physical processor limit. This means that, as far as Exchange 2010 is concerned, a physical host with 8 cores can have no more than 16 virtual processors across all VMs on that host or it will be unsupported.

    Also, no snapshots and no dynamically expanding disks.

    My point is that, in terms of proposing supported configurations, architects have plenty to worry about before they even get to the issue of DAGs.

    -Tim-
