Virtualization Short Take #31

Welcome back to yet another Virtualization Short Take! Here is a collection of virtualization-related items—some recent, some not, but hopefully all interesting and/or useful.

  • Matt Hensley posted a link to this VIOPS document on how to set up VMware SRM 4.0 with an EMC Celerra storage array. I haven’t had the chance to read through it yet.
  • Jason Boche informs us that both Lab Manager 3 and Lab Manager 4 have problems with the VMXNET3 virtual NIC. In this blog post, Jason describes how his attempts to install Lab Manager server into a VM with the VMXNET3 NIC were failing. Fortunately, Jason provides a workaround as well, but you’ll have to read his article to get that information.
  • Bruce Hoard over at Virtualization Review (disclaimer: I write a regular column for the print edition of Virtualization Review) stirred up a bit of controversy with his post about Hyper-V’s three problems. The first problem is indeed a problem, but not an architectural or technological one; VMware is indeed the market leader and has a quite solid user base. The other two “problems” stem from Microsoft’s architectural decision to embed the hypervisor into Windows Server. Like any other technology decision, this one has its advantages and disadvantages. Based on historical data, it would seem that the need to patch Windows Server will impact the uptime of the Windows virtualization solution; that’s not to say, however, that VMware ESX/ESXi don’t have their own patches and associated downtime. I guess the key takeaway here is that VMware seems to be doing a much better job of lessening (or even removing) the impact of that downtime through things like VMotion, DRS, HA, maintenance mode, and the like.
  • Apparently there is a problem with the GA release of the Host Update utility that is installed along with the vSphere Client, as outlined here by Barry Coombs. Downloading the latest version and reinstalling seems to fix the issue.
  • And while we are on the subject of ESX upgrades, here’s another one: if the /boot partition is too small, the upgrade to ESX 4.0.0 will fail. This isn’t really anything new and, as Joep points out, is documented in the vSphere Upgrade Guide. (There’s a quick pre-upgrade size check sketched after this list.) I prefer clean installations of VMware ESX/ESXi anyway.
  • Dave Mishchenko details his adventures (part 1, part 2, and part 3) in managing ESXi without the VI Client or the vCLI. While it’s interesting and contains some useful information, I’m not so sure the exercise is useful in any way other than academically. First, Dave enables SSH access to ESXi, which is unsupported. Second, while he shows that it’s possible to manage ESXi without the VI Client or the vCLI, it doesn’t seem to be very efficient (see the SSH sketch after this list). Still, there is some useful information to be gleaned for those who want to know more about ESXi and its inner workings.
  • I think Simon Seagrave and Jason Boche were collaborating in secret, since they both wrote posts about using vSphere’s power savings/frequency scaling functionality. Simon’s post is dated October 27; Jason’s post is dated November 11. Coincidence? I don’t think so. C’mon, guys, go ahead and admit it.
  • Thinking of using the Shared Recovery Site feature in VMware SRM 4.0? This VMware KB article might come in handy.
  • I’m of the opinion that every blogger has a few “masterpiece” posts. These are posts that are just so good, so relevant, so useful, that they almost transcend the other content on the blogger’s site. Based on traffic patterns, one of my “masterpiece” posts is the one on ESX Server, NIC teaming, and VLAN trunking. It’s not the most well-written post I’ve ever published, but it seems to have a lasting impact. Why do I mention this? Because I believe that Chad Sakac’s post on VMware I/O queues, microbursting, and multipathing is one of his “masterpiece” posts. Like Scott Drummonds, I’ve read that post multiple times, and every time I read it I get something else out of it, and I’m reminded of just how much I have yet to learn. Time to get back out of that comfort zone!
  • Oh, and speaking of Chad’s blog…this post is handy, too.
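
Since the /boot issue mentioned in the upgrade item above is easy to check ahead of time, here’s a minimal sketch of a pre-upgrade size check you could run from the ESX service console. Note that the 100 MB threshold below is purely a placeholder of mine, not the documented minimum; consult the vSphere Upgrade Guide for the actual requirement.

```python
#!/usr/bin/env python
# Hypothetical pre-upgrade check: is /boot big enough for the ESX 4.0 upgrade?
import os

REQUIRED_MB = 100  # placeholder value; see the vSphere Upgrade Guide for the real minimum

def partition_size_mb(path):
    """Return the total size of the filesystem mounted at `path`, in MB."""
    st = os.statvfs(path)
    return (st.f_blocks * st.f_frsize) / (1024.0 * 1024.0)

size = partition_size_mb("/boot")
if size < REQUIRED_MB:
    print("/boot is %.0f MB; the upgrade needs at least %d MB." % (size, REQUIRED_MB))
else:
    print("/boot is %.0f MB; size check passed." % size)
```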
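
And as a companion to the item on managing ESXi without the VI Client or vCLI, here’s a minimal sketch of scripting an ESXi host over SSH, in the spirit of Dave’s articles. Keep in mind that SSH access to ESXi is unsupported; the hostname and credentials below are placeholders, and the sketch assumes the paramiko library is installed.

```python
# Run a single command on an ESXi host over SSH (unsupported!) and print the output.
import paramiko

def run_on_esxi(host, username, password, command):
    """Execute `command` on the host via SSH and return its stdout."""
    client = paramiko.SSHClient()
    # Auto-accepting host keys is fine for a lab box, not for production use.
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(host, username=username, password=password)
    try:
        stdin, stdout, stderr = client.exec_command(command)
        return stdout.read().decode()
    finally:
        client.close()

# vim-cmd is the on-host command-line interface; vmsvc/getallvms lists registered VMs.
print(run_on_esxi("esxi01.example.com", "root", "password", "vim-cmd vmsvc/getallvms"))
```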

That’s all for now, folks. Stay tuned for the next installment, where I’ll once again share a collection of links about virtualization. Until then, feel free to share your own links in the comments.


12 comments

  1. Jason Boche

    I wasn’t aware Simon Seagrave wrote about CPU power management a few weeks ago. Apparently I have duplicated his efforts unknowingly. I will be sure to link to Simon’s article now that I am aware of it. Thank you, Scott, for making me aware of this, and thank you, Simon, for the fantastic CPU power management article at http://www.techhead.co.uk/saving-power-with-vmware-vsphere-esx-dynamic-voltage-and-frequency-scaling-dvfs

  2. slowe

    No worries, Jason, I was really just teasing you and Simon about the duplicate efforts. In fact, after I published the post I asked myself, “Was power scaling one of the topics of the vSphere blog contest and I didn’t even know?” Anyway, it happens all the time, and clearly your article wasn’t a duplicate of Simon’s. Although they shared the same topic, they are different articles. It just goes to show that great minds think alike!

  3. Duncan

    I must say that some bloggers have more masterpieces than others. Chad’s, Scott’s, Vaughn’s, and your blogs contain many invaluable articles, not just a couple!

    Historically speaking, my HA- and DRS-related articles have scored high; in particular, both the HA and DRS deep dives score at least 500 unique views per day.

  4. Nate

    I see little value in linking to the Bruce Hoard bit. It is completely lacking in substance or knowledge. Hyper-V now has live migration, so I fail to see how VMware is “lessening” the impact of patches over Hyper-V. No matter your hypervisor, it will need to be patched. No matter your hypervisor, patching will often mean rebooting said hypervisor. No matter your hypervisor, rebooting will mean the VMs go down (which is why you live migrate). The author and “analyst” fail to recognize that you would install in a Server Core deployment, minimizing exposure. “Historical data” shows that VMware has had a larger patch footprint than Hyper-V. I’m also not sure the “entrenched” user base is as large of an issue as it is made out to be. For one, there are still plenty of customers who have not yet made the move to virtualization. You have to assume they are the main target at the moment. You look to hit the “converts” next, hoping to get wins during upgrade cycles when they are going to have to move to something new anyway.

  5. slowe

    Nate,

    I agree that the addition of live migration to Hyper-V certainly changes the picture. I would also agree that converts are the primary targets here, not the existing installed base.

    I would disagree with the “historical data” that shows VMware with a larger patch footprint than Microsoft; that is a fallacy foisted by Microsoft. This “larger patch footprint” is due to the fact that VMware updates (by choice, not by necessity AFAIK) the entire ESXi image.

    Otherwise, good comments. Thanks for sharing, Nate!

  6. Nate

    I’ll concede that patch physical footprint matters little. I’d venture in return that maybe it’s time VMware folks concede the hypervisor physical footprint matters equally little. I’d propose a better measure would be downtime due to patching. Now, given that both options offer live migration, downtime is theoretically zero for either option, so that is somewhat of a moot point. So you might want to look at reliability of patches. From that standpoint, Microsoft has developed a rather tried-and-true patch delivery system. VMware’s track record on patch reliability has been below standards from my perspective.

  7. slowe

    I agree that hypervisor footprint really isn’t that big of a deal. The architectural differences between VMware ESX/ESXi and solutions such as Hyper-V and XenServer are so great that it is difficult to make comparisons between them; they are fundamentally different at the core. I personally believe that VMware’s approach is better than Microsoft’s and Citrix’s approaches, but that is my personal opinion.

    Can you elaborate as to the specifics that lead you to state that VMware’s patch reliability “has been below standards”? We are all aware of the ESX 3.5 Update 2 fiasco, but if that’s all you have to go on that’s pretty weak, IMHO.

  8. Nate

    Examples from your own blog:

    http://blog.scottlowe.org/2008/08/11/apparent-datetime-issue-with-update-2/
    http://blog.scottlowe.org/2008/12/23/random-reboots-with-vmware-esx-35-update-3/
    http://blog.scottlowe.org/2008/12/12/vmware-ha-problem-with-update-3/

    I’m not trying to start a religious war. I’d hope you personally believe that VMware has the better approach; otherwise, I’d question why you are writing books about the product. Personally, I think Hyper-V has the better approach, for the simply selfish reason that the product has worked better for me and my needs. I simply didn’t find any value in the Virtualization Review piece. It lacked anything concrete and was simply a regurgitation of some VMware talking points. I’d expect more from “analysts.” I was confused as to why you linked to it and then made the incorrect statement about VMware lessening the impact of patches vs. Hyper-V. On a semi-related side note: one thing I’ve noticed since bumping some systems to 2008 R2 is an increase in patches that do NOT require a reboot. Again, somewhat of a moot point from the virtualization standpoint because of live migration, but I was happy having less need to reboot some of the app servers (which is something else lost in this conversation: no matter what, you still need to patch the VMs themselves and therefore, at times, reboot them). Of course, that could simply be due to the “freshness” of the OS, and we might see an increase in reboot requirements as it ages, like all things do.

  9. slowe

    Nate,

    No religious wars here—just open, frank, and honest discussions.

    Those are (obviously) all valid examples. Of course, I’d already mentioned the Update 2 timebomb (the first link above), so we only have 3 examples of a “below standard” QA process. I’d agree that the QA process can improve (everyone’s can improve), but I don’t know that I would deem it “below standard”.

    In any case, my comment about VMware “lessening” the impact was that it is, to my knowledge, still easier to mitigate the impact of patching with VMware’s solution. I’ll fully concede that I could be misinformed here, but does Microsoft have a solution to evacuate all VMs from a host before patching it? Or do VMs have to be manually evacuated? As consolidation ratios increase (and I’m assuming that you want higher consolidation ratios), this becomes more important. Live migration helps, most certainly, but there is more to this story than just live migration.

    And you are absolutely correct about patching the guest OS inside the VMs; I’m glad you pointed that out. And I’m also glad to hear that Microsoft is making headway in producing patches that don’t require reboots.

    Good discussion!

  10. Nate

    Two updates in a row causing three major issues is below my standards. Like opinions, everyone has their own standards, so it is fine that we differ here. Every product is going to have a little glitch here or there with a patch, but those were some major issues close together that really didn’t do anything to boost confidence in VMware’s QA process (at least not for me).

    With VMM R2 you get “maintenance mode,” which does in fact give you automatic evacuation of all VMs. Here are the new features of VMM R2:

    http://www.microsoft.com/systemcenter/virtualmachinemanager/en/us/r2.aspx

  11. Nate

    Sorry, I meant to provide this link (though you can get there from the link above as well):

    http://www.microsoft.com/systemcenter/virtualmachinemanager/en/us/whats-new-R2.aspx
