Virtualization Short Take #25

Welcome to Virtualization Short Take #25, the first edition of this series for 2009! Here I’ve collected a variety of articles and posts that I found interesting or useful. Enjoy!

  • We’ll start off today’s list with some Hyper-V links. First up is this article on how to manually add a VM configuration to Hyper-V. I’d be interested to know some of the technical details—i.e., the design decisions that led Microsoft to architect things in this way—that might explain why this process is, in my opinion, so complicated. Was it scalability? Manageability? If anyone knows, please share your information in the comments.
  • It looks like this post by John Howard on how to resolve event ID 4096 with Hyper-V is also closely related.
  • This blog post brings to light a clause in Microsoft’s licensing policy that forces organizations to use Windows Server 2008 CALs when accessing a Windows Server 2003-based VM hosted on Hyper-V. In the spirit of disclosure, it’s important to note that this was written by VMware, but an independent organization apparently verified the licensing requirements. So, while you may get Hyper-V at no additional cost (not free) with Windows Server 2008, you’ll have to pay to upgrade your CALs to Windows Server 2008 in order to access any Windows Server 2003-based VMs on those Hyper-V hosts. Ouch.
  • Wrapping up this edition’s Microsoft virtualization coverage is this post by Ben Armstrong warning Hyper-V users about the use of physical disks with VMs. Apparently, it’s possible to connect a physical disk to both the Hyper-V parent partition as well as a guest VM, and…well, bad things can happen when you do that. The unfortunate part is that Hyper-V doesn’t block users from doing this very thing.
  • Daniel Feller asks the question, “Am I the only one who has trouble understanding Cloud Computing?” No, Daniel, you’re not the only one—I’ve written before about how amorphous and undefined cloud computing is. In this post over at the Citrix Community site, Daniel goes on to indicate that cloud computing’s undefined nature is actually its greatest strength:

    As I see it, Cloud Computing is a big white board waiting for organizations to make their requirements known. Do you want a Test/QA environment to do whatever? This is cloud computing. Do you want someone to deliver office productivity applications for you? That is cloud computing. Do you want to have all of your MP3s stored on an Internet storage repository so you can get to it from any device? That is also cloud computing.

    Daniel may be right there, but I still insist that there need to be well-defined and well-understood standards around cloud computing in order for cloud computing to really see broad adoption. Perhaps cloud computing is storing my MP3s on the Internet, but what happens when I want to move to a different MP3 storage provider? Without standards, that becomes quite difficult, perhaps even impossible. I’m not the only one who thinks this way, either; check this post by Geva Perry. Until some substance appears in all these clouds, people are going to hold off.

  • Rodney Haywood shared a useful command to use with VMware HA in this post about blades and VMware HA. He points out that it’s a good idea to spread VMware HA primary nodes across multiple blade chassis so that the failure of a single chassis does not take down all the primary nodes. One note about using the “ftcli” command: you’ll need to set the FT_DIR environment variable first using “export FT_DIR=/opt/vmware/aam” (assuming you’re using bash as the shell on VMware ESX). Otherwise, the advice to spread both clusters and primary agents across multiple chassis is well worth following.
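
    To make that concrete, here’s a minimal sketch of the setup step (bash assumed); the ftcli invocation itself is commented out and purely illustrative, since its exact path and subcommand syntax vary by AAM version:

    ```shell
    # Point the AAM tools at their install directory (bash assumed):
    export FT_DIR=/opt/vmware/aam

    # Illustrative only -- list the HA nodes to see which hosts are primaries.
    # The path and subcommand below are assumptions, not gospel:
    # $FT_DIR/bin/ftcli -domain vmware -cmd "listnodes"

    echo "$FT_DIR"   # confirm the variable is set for this shell session
    ```
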
  • Joshua Townsend has a good post about using PowerShell and SQL queries to determine the amount of free space within guest VMs. As he states in his post, this can often impact the storage design significantly. It seems to me that there used to be a plug-in for vCenter that added this information, but I must be mistaken as I can no longer find it. Oh, and one of Eric Siebert’s top 10 lists also points out a free utility that will provide this information as well.
  • I don’t have a record of where this information turned up, but this article from NetApp (NOW login required) on troubleshooting NFS performance was quite helpful. In particular, it linked to this VMware KB article that provides in-depth information on how to identify IRQ sharing that’s occurring between the Service Console and the VMkernel. Good stuff.
  • Want more information on scaling a VMware View installation? Greg Lato posts a notice about the VMware View Reference Architecture Kit, available from VMware, that provides more information on some basic “building blocks” in creating a large-scale View implementation. I’ve only had the opportunity to skim through the documents so far, but I like what I’ve seen. Chad also mentions the Reference Architecture Kit on his site as well.
  • Duncan at Yellow Bricks posts yet another useful “in the trenches” post about VMFS-3 heap size. If your VMware ESX server is handling more than 4TB of open VMDK files, then it’s worth having a look at this VMware KB article.
  • The idea of “virtual routing” is interesting, but I share the thoughts of one of the commenters in that technologies like VMotion/XenMotion/live migration may not be able to respond quickly enough to changing network patterns to be effective. Perhaps it’s just my server-centric view showing itself, but it seems more “costly” (in terms of effort) to move servers around to match traffic flow than to just route the traffic accordingly.
  • Crossbow looks quite cool, but I’m having a hard time understanding the real business value. I am quite confident that my lack of understanding about Crossbow is simply a reflection of the fact that I don’t know enough about Solaris Containers or how Xen handles networking, but can someone help me better understand? What is the huge deal with Crossbow?
  • Jason Boche shares some information with us about how to increase the number of simultaneous VMotion operations per host. That information could be quite handy in some cases.
  • I had high hopes for this document on VMFS best practices, but it fell short of my hopes. I was looking for hard guidelines on when to use isolation vs. consolidation, strong recommendations on VMFS volume sizes and the number of VMs to host in a VMFS volume, etc. Instead, I got an overview of what VMFS is and how it works—not what I needed.
  • Users interested in getting started with PowerShell with VMware Infrastructure should have a look at this article by Scott Herold. It’s an excellent place to start.
  • Here’s a list of some of the basic things you should do on a “golden master” template for Windows Server VMs. I actually disagree with #15, preferring instead to let Windows manage the time at the guest OS level. The only other thing I’d add: be sure your VMDK is aligned to the underlying storage. Otherwise, this is a great checklist to follow.

I think that should just about do it for this post. Comments are welcome!



  1. Josh Townsend’s avatar

    Thanks for the pingback, Scott. You are correct on there being a vCenter plugin for obtaining free disk space – Rich Garsthagen wrote a plug-in called VCPlus that accomplishes this. I used it for a while a few years back and had mixed results (some VMs were just not being reported on). Rich and I emailed back and forth a few times trying to figure it out, but I dropped off as new projects demanded time. VCPlus also offers a few other neat options, such as showing snapshot status and syncing the DNS name with the VC display name for each guest. Rich also has a sample script for the VI Perl Toolkit called VMDiskFree that can report on guest free space.

  2. TimC’s avatar

    Well, Crossbow should make a lot of sense to you. Imagine ESX’s networking stack on crack.

    Virtual switches, virtual NICs, plus the ability to use the full suite of network tools against those virtual interfaces.

    If you’ve spent any amount of time with Xen, that’s one place where, to me, it is SEVERELY lagging behind VMware. The networking usability IMHO is nowhere near what VMware has. Crossbow is a leapfrog from what I’ve read on it.

    Ben Rockwood has a couple of good posts that may help you out a bit.

  3. Stu’s avatar

    Hi Scott,

    With regards to Crossbow, it’s just bringing what has been available in ESX since day dot to Solaris zones. So you’re right in that it seems like nothing new to us, but it is a fairly big addition for Solaris.

  4. David Magda’s avatar

    Re: Crossbow

    Remember that Sun bought the company that made VirtualBox, and is also working on being able to run Solaris as both dom0 (host) and domU (guest) instances in Xen. Also combine that with their use of LDoms and containers, and you have a virtualization stack that runs from heavy (LDoms), to middle (VB/Xen), to very lightweight (zones).

    Given the flexibility of how you can set up guests, you need a network stack that can be configured to connect to upstream switches in a flexible manner.

    It also gives the option to do Layer 2 isolation within the same host, which is fairly standard (ESX does this). But you can also do something like QoS (UDP allowed only 10% of bandwidth; HTTPS before anything else; etc.). All the QoS stuff is done in hardware AFAICT, so if you’re trying to do traffic shaping over dual 10 GigE interfaces near saturation, you’re not pegging the CPU doing packet inspection and QoS.

    Just like ZFS features (ZIL logging, L2ARC), DTrace, and the new kernel CIFS server allowed Sun to make a high-end filer out of a bunch of SATA JBODs.

    A lot of this, and other OpenSolaris projects, could mean you could possibly see a Cisco 6509 replacement from Sun. (They already sell a lot of InfiniBand stuff for HPC.)

    If you take a look at the 7000 series storage, you can see that each controller can have up to 128 GB of non-proprietary RAM (you can buy it from third parties). So with two controllers, you can have 256 GB of RAM. And this is built on a “low-end” chassis from Sun (T5120). Now take a look at NetApp’s highest-end FAS6080, which can have 64 GB per controller. And I’m guessing the 6080 is A LOT more expensive than the Sun system, and you probably have to buy RAM directly from NetApp–no Kingston for you.

    Idle speculation / brainstorming: so you take that ‘analogy’ and move it over to the networking world. I don’t know how much RAM most routers or switches take, but if Sun builds a switch based off of one of their Niagara processors, imagine having a router with 128+ GB of RAM. That’s a lot of space for routing tables. :)

    Fun times.

  5. slowe’s avatar


    I agree–a lot of the stuff I see coming out of Sun is just great. Unfortunately, they just can’t seem to get any traction on it. ZFS is tremendous technology, and the breadth of virtualization options–as you pointed out–is impressive. I really just wish they could “turn the corner,” as it were, and start seeing some uptake on some of this stuff.

    Thanks for reading!

  6. slowe’s avatar


    That’s what I thought! I was pretty certain that Richard had written something like that. Is it still around, and do people still use it?

  7. David Magda’s avatar

    Yes, Sun is very much an engineering and design company, but unfortunately all their wonderful software and kit isn’t being reflected in profits. I’ve used various things over the last decade, and S10 and their current hardware is some of the best stuff money can buy (and extremely competitively priced).

    I have no idea why they’re having such a rough time.

  8. Vinf’s avatar

    I’ve done a lot of work in the last year with our customers on defining a cloud platform reference architecture from an infrastructure point of view – i.e., what bits of tech go where and how they plug together – to try and demystify this kind of thing.

    I have a session on this submitted; the first basic version is online here, but I will put up the rest once I find out if my session has been accepted or not.


    In the meantime I’m always interested in discussion around this topic!

  9. slowe’s avatar


    I guess that’s what has me confused–Crossbow is being touted as “revolutionary,” but it doesn’t seem all that fundamentally different than VMware ESX’s networking functionality (save some advanced QoS and that sort of thing). Had I spent more time in the Xen world, I guess it would make much more sense just how revolutionary Crossbow actually is. This isn’t a knock against Crossbow, understand–just coming from where I come from, I was having a hard time getting the picture of why people were so excited about it. Between David Magda and you, I think I have a better picture now. Thanks!

    Oh, and it was Ben’s blog that alerted me to Crossbow in the first place, but thanks for the pointer!

  10. slowe’s avatar

    Vinf (Simon),

    Part of the problem is that there is no clear definition of cloud computing. If you accept that cloud computing is a pooling of resources upon which you run applications, then your reference architecture for a private cloud is right on the money, and it’s something that everyone can pretty easily get their hands around.

    If, on the other hand, you define cloud computing as the delivery of computing power across the Internet (specifically), then your reference architecture doesn’t define that.

    In my mind, the very fact that we don’t have a clear and consistent definition of cloud computing is the largest part of the problem understanding cloud computing.

  11. Nate’s avatar

    Hey Scott,

    Kinda went anti-MS on this one, didn’t you? I thought you purported to offer a balanced perspective?

    On the manual VM addition, the author says right in his blog “mainly due to live snapshots.” Doing it manually isn’t the supported way from MS, and I don’t know why you would want to. You can export and import the VMs if you so choose. If you are looking for DR-type purposes you should look into DPM, not just taking copies of a VHD and trying to manually stuff things back together. DPM can snapshot any of your virtual machines live (using VSS), and then you can restore them from there without messing with any of this manual stuff. DPM is Hyper-V aware, so you can just restore the VM to whatever host you choose from whatever point in time. You can use DPM2DPM4DR if you want offsite replication. I can see the VMware response coming already with “why should I have to have extra MS stuff to get the job done?” Well, you have to have extra stuff to get the same type of functionality out of VMware. I’m not even sure if there is a VSS-aware option for VMware to truly get the same functionality of a live, perfectly clean snapshot. Besides, you are going to want Virtual Machine Manager anyway, and if you are a Microsoft shop you’d also like Ops Manager and Config Manager. Just buy a single System Center enterprise management license for the physical box and it covers unlimited VMs on it for all four products: Ops Manager, VMM, Config Manager, and DPM.

    On the licensing issue, the report is coming from VMware, so I don’t immediately take it as codified truth. For the sake of argument, however, let’s assume it is correct. I can’t think of any enterprise Microsoft customer that would not have Software Assurance. You’d have to be absolutely foolish not to. And if you do have Software Assurance, all of your 2K3 CALs became 2K8 CALs the instant 2K8 was available. If you are not keeping Software Assurance on your Microsoft products, I’d have to assume you are a very small shop, in which case you can either A) run Hyper-V Server for free and not worry about it (you are small, so you probably don’t need the higher-end features of full Hyper-V, aka you’d probably run ESXi if you weren’t running Hyper-V), or B) pay to upgrade the few CALs you have. Again, this is only assuming that the VMware article about Microsoft licensing is accurate. It could be FUD, especially considering they aren’t even willing to completely own their accusation.

  12. slowe’s avatar

    Hi Nate,

    Actually, if you go back and read my comment, you’ll see that I said I would be very interested in knowing WHY the process seemed so complicated. If it doesn’t seem complicated to you, then fine. Someone with extensive Hyper-V experience–which I’ll be the first to admit I don’t have–probably views this as quite straightforward, and VMware’s method seems complicated. I did not go “anti-MS,” but rather merely stated my observation.

    VMware’s process would be a) shut down the VM; b) move/copy all the VM’s files to the appropriate destination; c) register the VM. AFAIK, that process is the same whether the VM has snapshots or not.
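
    For anyone who hasn’t seen the ESX side, that three-step process looks roughly like this from the service console. The datastore paths and VM name below are hypothetical, and the vmware-cmd options are from ESX 3.x, so treat it as a sketch rather than a recipe:

    ```shell
    # Hypothetical source/destination paths -- adjust for your environment.
    SRC=/vmfs/volumes/datastore1/myvm
    DST=/vmfs/volumes/datastore2/myvm

    # a) Shut down the VM (try a guest-initiated shutdown first):
    # vmware-cmd "$SRC/myvm.vmx" stop trysoft

    # b) Move/copy all of the VM's files to the destination:
    # cp -r "$SRC" "$(dirname "$DST")"

    # c) Register the VM in its new location:
    # vmware-cmd -s register "$DST/myvm.vmx"

    echo "moved $(basename "$SRC")"   # placeholder so the sketch runs end to end
    ```
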

    (Just for the record, by the way, VMware has supported VSS-quiesced VM snapshots since VMware ESX 3.5 Update 2.)

    I don’t take the licensing issue to be codified truth, either, which is why I stated that an independent organization “apparently” verified the requirements. I have not personally verified the requirements, and I haven’t seen any information to the contrary. Having only the information available to me, then, I can only come to one conclusion, and that conclusion is that the licensing requirements are accurate. If you have information to the contrary, please share. I brought it up because I felt like it was a piece of information that users needed, regardless of which virtualization platform they are using and it could affect quite a few users. No organization wants to get surprised with expenses they didn’t expect or forecast, and this seemed like something that could end up that way if users didn’t know.

    I’m not really sure where the comment about DPM, Ops Mgr, Config Mgr, SCVMM came from, as I didn’t touch on them in my post. In my opinion, those products are complementary to either virtualization platform.

    Thanks for your comment!

  13. Nate’s avatar

    I agree Robert’s method is extremely complicated, and also pointless. I guess that was my point. You are not supposed to move VMs that way, so why expect it to be clean and easy? If you want to take a VM configuration and move it from one server to another, there is an easy export/import option. All you have to do is: 1. shut down the VM; 2. right-click the VM and select export; 3. move or copy the exported files to the appropriate place; 4. click import virtual machine and select your files. Pretty easy. I like the export/import method because you get a nice little package of everything related to the VM (you can also just grab the config if you want). Robert was apparently talking about using his complicated manual method for DR purposes, which is why I talked about DPM. The appropriate way to handle DR for your Hyper-V infrastructure is to back up your VMs with DPM. Then recovery is ultra simplistic, none of this complicated manual stuff. Heck, if it came down to it, I’d just create a new VM and point it at the existing VHD before I went through all that Robert is doing.

    I am curious as to what the real story is with licensing. It has no impact for me since I use Software Assurance, but I am curious now. I dislike the marketing tactic from VMware, though. They put it out there with an extreme lack of clarity, but got exactly what they wanted out of it. Blog sites like yours picked it up. Trying to do a quick search for real information on the licensing issue, what did I hit? Blogs repeating the VMware claim with their sole source of information being the same VMware page you linked to. I had the same experience trying to get valid information on NIC teaming. That’s really frustrating. At least with NIC teaming it was a technical thing, so I could just try it for myself to see what reality was (once I had time). Licensing you can’t just ‘figure out’.

    I brought up the SC suite because I had brought up DPM for use as DR for your Hyper-V infrastructure. In other conversations I’ve had with VMware folk, I tend to get the “oh, see, there’s M$ trying to make you spend more money on M$ products” whenever I bring up complementary products that make life easier for you. I think Microsoft has done some great things in licensing for virtualized environments. The unlimited VMs with Datacenter (even if you are using VMware) is great, and the unlimited managed VMs with the SC enterprise management license is great too. VMware wants to be nasty to Microsoft about licensing, but Microsoft’s licensing changes have helped VMware quite a bit. Heck, they probably could have been nasty and made it so that the Datacenter unlimited VMs only counted for Hyper-V, but they didn’t.

  14. slowe’s avatar

    Hiya Nate,

    Every vendor–Microsoft included–throws their own fair share of FUD out there. It would be great if every vendor just let their product stand on its own merits, but…alas, that is not the world in which we live.

    Thanks for reading and commenting!

  15. TimC’s avatar


    I’m guessing it’s all a matter of perspective. ZFS was/is also considered “revolutionary” by those same folks.

    I personally say “so it’s WAFL for free minus some features”.

    I can agree with the revolutionary part from their perspective of “but it’s free and open”. That’s entirely true. Having ZFS and Crossbow in the public domain is a huge step forward for open source, and especially so given we don’t know if Sun will be able to turn that corner or not.

    At this point I can’t imagine an open-source Solaris wouldn’t live on even if Sun did go the way of the dodo (which I also don’t think is going to happen).

    I would imagine, though, much like with the 7000 series storage server, you will see some sort of “VM appliance” out of Sun in 2009 that will give VMware a serious run for its money (assuming they can get mind share with it).

  16. Rodos’s avatar

    I have entered the conversation (or should that be “continued”?): “What is the Cloud?”

    You can read my answer at


  17. Raymond Brighenti’s avatar

    Hi Scott,

    On the time issue, all our VMs are on our domain, so initially I always went along with having the VMs sync with the domain controller.
    Then I was told at the VCP training that you should always have it sync with the host (and of course make sure the host is syncing properly), the reason being that if the machine goes idle for, say, 5 minutes, the next time it goes to execute it’ll be 5 minutes out, and will only sync again when scheduled in Windows, which could be several minutes later, so the machine would be out of sync. On the other hand, if it’s set in VMware Tools to sync with the host, it’ll make sure it’s synced before executing.

    Can anyone shed some light, especially for machines on a Windows domain?



Comments are now closed.