VMwareHA


This is just a quick post inspired by Mike Laverick’s recent “Stupid IT” post, in which he weighed in on the blog discussion between Steve Chambers and me regarding “putting all your eggs in one basket,” the most common argument against high consolidation ratios and, in some cases, against consolidation in general.

Mike’s articles—part 1 and part 2—are excellent. The interesting thing here is that, when you really boil it down, my viewpoint is not that far off from Steve’s and Mike’s (spoiler warning: Mike agrees with Steve). In my blog post, I tried to focus less on whether high consolidation ratios are good or bad and more on whether high consolidation ratios—and the impact of the design decision to use them—will satisfy the needs of the business.

I agree with a number of points from Steve’s post. For example, I agree the root cause of an outage is more likely to be human error than hardware outage. I also agree that building redundancy into the infrastructure helps further reduce the possibility of an outage. Mike makes the same argument:

The truth is that hardware and software components are so reliable and redundant they hardly ever fail. In fact, so much availability software is geared towards protecting the server from hardware failure that some of my peers are beginning to question why they even buy SKUs that contain VMware HA.

So if everything is so redundant and so stable, why do people buy VMware HA? Why do people use clustering solutions like Windows Failover Clustering? Why do people use VMware FT or Neverfail or any of the rest of it?

The answer is simple: fear. Businesses are afraid of their applications being unavailable. In some cases, this fear is irrational, and from this perspective I agree wholeheartedly with both Mike and Steve: don’t use the “all my eggs in one basket” argument with me just because it scares you, just because you’re afraid of running all your workloads together.

On the other hand, though, this fear might be justified. What if the application or applications in question are the very lifeblood of the business? If you are an online-only organization, keeping your web site up and accessible is crucial; if the web site is down, lots of money gets lost. In this situation, the fear of being unavailable is justified. It’s not irrational—it’s based on a keen understanding of the needs of the business and the impact of an outage upon the business. And in those cases, where consolidation ratios are deliberately kept low in order to satisfy the needs of the business, I’ll accept the “all my eggs in one basket” argument.

Come to me with the “all my eggs in one basket” argument backed by irrational fear and a lack of information, and I’ll argue against it every time. Come to me with the “all my eggs in one basket” argument backed by an understanding of how IT aligns with the business and the impact of an outage on the business, and I’ll listen to—and possibly even agree with—your position. As I and so many others have stated on numerous occasions, don’t pursue high consolidation ratios for the sake of high consolidation ratios. Pursue them because it makes the most sense for the business.

In the end, I guess my point is that both Steve and Mike have missed the point. Not that their viewpoints are irrelevant; quite the opposite! Both of them make very good points that are quite relevant and pertinent to the discussion of “Why not higher consolidation ratios?” Unfortunately, that’s not the question that needs to be asked or answered. The question should be, “What is best for the business?” In that context, putting “all your eggs in one basket” isn’t always the best answer.

Courteous comments welcome!


With the release of VMware vSphere 4 earlier this year, VMware officially introduced VMware Fault Tolerance (VMware FT), a new mechanism for providing extremely high levels of availability to virtual machine workloads. As I’ve talked with customers, I’ve noticed a growing number of customers who are unaware of the differences between the types of high availability that VMware provides (in the form of VMware HA and VMware FT) and operating system-level clustering (such as Microsoft Windows Failover Clustering). Although both types of technology are intended to increase availability and reduce downtime, they are very different and offer different types of functionality.

Consider these points:

  • While using VMware HA will protect you against the failure of an ESX/ESXi host, VMware HA won’t—by default—protect you against the failure of the guest operating system. An OS-level cluster, on the other hand, does protect against the failure of the guest operating system. +1 for OS-level clustering.
  • Clusters using VMware HA can enable VM Failure Monitoring and gain some level of protection against the failure of the guest operating system, but you still won’t get protection for the specific application within the guest operating system, unlike an OS-level cluster. +1 for OS-level clustering.
  • These same arguments also apply to VMware FT. VMware FT won’t protect you against guest operating system failure—a crash of the OS in the primary VM generally means a crash of the OS in the secondary VM at the same time—and it won’t protect you against application failure. +1 for OS-level clustering.
  • You can’t fail over between systems using VMware HA or VMware FT in order to perform OS upgrades or apply OS patches. +1 for OS-level clustering.
  • Similarly, you can’t fail over between systems using VMware HA or VMware FT in order to do a rolling upgrade of the application itself. +1 for OS-level clustering.
  • Of course, the VMware technologies do have some advantages. Both VMware HA and VMware FT are far, far simpler to enable and configure than an OS-level cluster. +1 for VMware.
  • Neither VMware HA nor VMware FT requires any application support in order to protect the VM and its workloads. +1 for VMware.
  • Neither technology requires that you license specific editions of the guest operating system or application in order to use their benefits. +1 for VMware.
  • VMware HA can produce higher levels of utilization within a host cluster than OS-level clustering can. +1 for VMware.
  • VMware FT can provide higher levels of availability than what is available in most OS-level clustering solutions today. +1 for VMware.

This is not a knock against any of the technologies listed—VMware HA, VMware FT, or OS-level clustering—but rather an exploration of their advantages, disadvantages, similarities, and differences. Hopefully, this will help readers who might not be as familiar with these products make a more informed decision about which technologies to deploy in their data center. (Hint: You’ll probably need all of them.)


My irregular “Virtualization Short Takes” series was put on hold some time ago after I started work on Mastering VMware vSphere 4. Now that work on the book is starting to wind down just a bit, I thought it would be a good time to try to resurrect the series. So, without further delay, welcome to the return of Virtualization Short Takes!

  • Triggered by a series of blog posts by Arnim van Lieshout on VMware ESX memory management (Part 1, Part 2, and Part 3), Scott Herold decided to join the fray with this blog post. Both Scott’s post and Arnim’s posts are good reading for anyone interested in getting a better idea of what’s happening “under the covers,” so to speak, when it comes to memory management.
  • Perhaps prompted by my post on upgrading virtual machines in vSphere, a lot of information has come to light regarding the PVSCSI driver. Some are advocating changes to best practices to incorporate the PVSCSI driver, but others seem to be questioning the need to move away from a single drive model (a necessary move since PVSCSI isn’t supported for boot drives). Personally, I just want VMware to support the PVSCSI driver on boot drives.
  • Eric Sloof confirms for us that name resolution is still the Achilles’ Heel of VMware High Availability in VMware vSphere.
  • I don’t remember where I picked up this VMware KB article, but it sure would be handy if VMware could provide more information about the issue, such as what CPUs might be affected. Otherwise, you’re kind of shooting in the dark, aren’t you?
  • Upgraded to VMware vSphere, and now having issues with VMotion? Thanks to VMwarewolf, this pair of VMware KB articles (here and here) might help resolve the issue.
  • Chad Sakac of EMC, co-conspirator for the storage portion of Mastering VMware vSphere 4 (pre-order here), has been putting out some very good posts.
  • Leo Raikhman pointed me to this article about IRQ sharing between the Service Console and the VMkernel. I think I’ve mentioned this issue here before…but after more than 1,000 posts, it’s hard to keep track of everything. In any case, there’s also a VMware KB article on the matter, and a quick way to check a host yourself is sketched at the end of this list.
  • And speaking of Leo, he’s been putting up some great information too: notes on migrating Ubuntu servers (in turn derived from these notes by Cody at ProfessionalVMware), a rant on CDP support in ESX, and a note about the EMC Storage Viewer plugin. Good work, Leo!
  • If you are interested in a run-down of the storage-related changes in VMware vSphere, check out this post from Stephen Foskett.
  • Rick Vanover notes a few changes to the VMFS version numbers here. The key takeaway is that no action is required, but you may want to plan some additional tasks after your vSphere upgrade to optimize the environment.
  • In this article, Chris Mellor muses on how far VMware may go in assimilating features provided by their technology partners. This is a common question; many people see the addition of thin provisioning within vSphere as a direct affront to array vendors like NetApp, 3PAR, and others who also provide thin provisioning features in their arrays. I’m not convinced this feature is competitive so much as complementary. Perhaps I’ll write a post about that in the near future…oh wait, never mind, Chad already did!
  • File this one away in the “VMware-becoming-more-like-Microsoft” folder.
  • My occasional mentions of Crossbow prompted a full-on explanation of the Open Networking functionality of OpenSolaris by a Sun engineer. It kind of looks like SR-IOV and VMDirectPath to me…sort of. Don’t you think?
  • If you are thinking about how to incorporate HP Virtual Connect Flex-10 into your VMware environment, Frank Denneman has some thoughts to share. I’ve been told by HP that I have some equipment en route with which I can do some additional testing (the results of which will be published here, of course!), but I haven’t seen it yet.
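A quick follow-up on the IRQ sharing item above: if you want to check a host yourself, ESX 3.x exposes its interrupt assignments under /proc. This is a minimal sketch, assuming an ESX 3.x Service Console; the /proc layout varies between ESX versions, so treat the path as illustrative:

    # Show interrupt vectors and the devices that own them; an IRQ listed
    # with both a VMkernel device and a Service Console (COS) device is
    # being shared, which is the condition described above.
    cat /proc/vmware/interrupts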
OK, I guess that should just about do it. Thanks for reading, and please share your thoughts, interesting links, or (pertinent) rants in the comments.


    Welcome to Virtualization Short Take #25, the first edition of this series for 2009! Here I’ve collected a variety of articles and posts that I found interesting or useful. Enjoy!

    • We’ll start off today’s list with some Hyper-V links. First up is this article on how to manually add a VM configuration to Hyper-V. It would be interesting to know some of the technical details—i.e., the design decisions that led Microsoft to architect things this way—that might explain why this process is, in my opinion, so complicated. Was it scalability? Manageability? If anyone knows, please share your information in the comments.
    • It looks like this post by John Howard on how to resolve event ID 4096 with Hyper-V is also closely related.
    • This blog post brings to light a clause in Microsoft’s licensing policy that forces organizations to use Windows Server 2008 CALs when accessing a Windows Server 2003-based VM hosted on Hyper-V. In the spirit of disclosure, it’s important to note that this was written by VMware, but an independent organization apparently verified the licensing requirements. So, while you may get Hyper-V at no additional cost (not free) with Windows Server 2008, you’ll have to pay to upgrade your CALs to Windows Server 2008 in order to access any Windows Server 2003-based VMs on those Hyper-V hosts. Ouch.
    • Wrapping up this edition’s Microsoft virtualization coverage is this post by Ben Armstrong warning Hyper-V users about the use of physical disks with VMs. Apparently, it’s possible to connect a physical disk to both the Hyper-V parent partition and a guest VM, and…well, bad things can happen when you do that. The unfortunate part is that Hyper-V doesn’t block users from doing this very thing.
    • Daniel Feller asks the question, “Am I the only one who has trouble understanding Cloud Computing?” No, Daniel, you’re not the only one—I’ve written before about how amorphous and undefined cloud computing is. In this post over at the Citrix Community site, Daniel goes on to indicate that cloud computing’s undefined nature is actually its greatest strength:

      As I see it, Cloud Computing is a big white board waiting for organizations to make their requirements known. Do you want a Test/QA environment to do whatever? This is cloud computing. Do you want someone to deliver office productivity applications for you? That is cloud computing. Do you want to have all of your MP3s stored on an Internet storage repository so you can get to it from any device? That is also cloud computing.

      Daniel may be right there, but I still insist that there need to be well-defined and well-understood standards around cloud computing in order for cloud computing to really see broad adoption. Perhaps cloud computing is storing my MP3s on the Internet, but what happens when I want to move to a different MP3 storage provider? Without standards, that becomes quite difficult, perhaps even impossible. I’m not the only one who thinks this way, either; check this post by Geva Perry. Until some substance appears in all these clouds, people are going to hold off.

    • Rodney Haywood shared a useful command to use with VMware HA in this post about blades and VMware HA. He points out that it’s a good idea to spread VMware HA primary nodes across multiple blade chassis so that the failure of a single chassis does not take down all the primary nodes. One note about using the “ftcli” command: you’ll need to set the FT_DIR environment variable first using “export FT_DIR=/opt/vmware/aam” (assuming you’re using bash as the shell on VMware ESX); there’s a brief sketch at the end of this list. That aside, the advice to spread clusters, and the primary nodes within them, across multiple chassis is advice worth following.
    • Joshua Townsend has a good post at VMtoday.com about using PowerShell and SQL queries to determine the amount of free space within guest VMs. As he states in his post, this can often impact the storage design significantly. It seems to me that there used to be a plug-in for vCenter that added this information, but I must be mistaken as I can no longer find it. Oh, and one of Eric Siebert’s top 10 lists also points out a free utility that will provide this information as well.
    • I don’t have a record of where this information turned up, but this article from NetApp (NOW login required) on troubleshooting NFS performance was quite helpful. In particular, it linked to this VMware KB article that provides in-depth information on how to identify IRQ sharing that’s occurring between the Service Console and the VMkernel. Good stuff.
    • Want more information on scaling a VMware View installation? Greg Lato posts a notice about the VMware View Reference Architecture Kit, available from VMware, that provides more information on some basic “building blocks” for creating a large-scale View implementation. I’ve only had the opportunity to skim through the documents thus far, but I like what I’ve seen. Chad mentions the Reference Architecture Kit on his site as well.
    • Duncan at Yellow Bricks posts yet another useful “in the trenches” post about VMFS-3 heap size. If your VMware ESX server is handling more than 4TB of open VMDK files, then it’s worth having a look at this VMware KB article.
    • The idea of “virtual routing” is interesting, but I share the concern of one of the commenters that technologies like VMotion/XenMotion/live migration may not be able to respond quickly enough to changing network patterns to be effective. Perhaps it’s just my server-centric view showing itself, but it seems more “costly” (in terms of effort) to move servers around to match traffic flow than to just route the traffic accordingly.
    • Crossbow looks quite cool, but I’m having a hard time understanding the real business value. I’m quite sure that’s simply a reflection of the fact that I don’t know enough about Solaris Containers or how Xen handles networking, but can someone help me better understand? What’s the big deal with Crossbow?
    • Jason Boche shares some information with us about how to increase the number of simultaneous VMotion operations per host. That information could be quite handy in some cases.
    • I had high hopes for this document on VMFS best practices, but it fell short of my hopes. I was looking for hard guidelines on when to use isolation vs. consolidation, strong recommendations on VMFS volume sizes and the number of VMs to host in a VMFS volume, etc. Instead, I got an overview of what VMFS is and how it works—not what I needed.
    • Users interested in getting started with PowerShell with VMware Infrastructure should have a look at this article by Scott Herold. It’s an excellent place to start.
    • Here’s a list of some of the basic things you should do on a “golden master” template for Windows Server VMs. I actually disagree with #15, preferring instead to let Windows manage the time at the guest OS level. The only other thing I’d add: be sure your VMDK is aligned to the underlying storage. Otherwise, this is a great checklist to follow.
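    Following up on Rodney’s ftcli tip above, here is roughly what the check looks like from the Service Console. Consider this a sketch rather than a reference: the “listnodes” command name and the -domain argument reflect my recollection of the AAM CLI, so verify them against your own ESX build.

        # ftcli won't run unless FT_DIR points at the AAM installation
        # (assuming bash as the shell on VMware ESX):
        export FT_DIR=/opt/vmware/aam

        # List the HA (AAM) nodes and their roles, so you can confirm the
        # primary nodes are spread across blade chassis:
        /opt/vmware/aam/bin/ftcli -domain vmware -cmd "listnodes"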

    I think that should just about do it for this post. Comments are welcome!


    I’ve been communicating with a reader who is experiencing random reboots of virtual machines on his HA/DRS-enabled cluster running VMware ESX 3.5 Update 3. At first, I thought his problem was related to the bug with VM failure monitoring that I discussed here, but upon further discussion it turns out the random reboots continue to occur even when VM failure monitoring is disabled. The only relief the reader has been able to find thus far has been to completely disable VMware HA on his cluster, which—to be honest—is a less than acceptable solution.

    After a little bit of digging around, I turned up this VMware Communities thread, in which several other users also indicate they are seeing the same kinds of problems. The thread closes out by referencing this post by Duncan Epping regarding the VM failure monitoring bug. Clearly, though, this bug should not be affecting users who do not have VM failure monitoring enabled. I also found this blog post about another user having the issue, although it sounds like his problem was solved by disabling VM failure monitoring.

    Further research turned up this KB article on a post-Update 3 patch that may address some of the random reboot issues. Judging from the KB article, it looks like the random reboots may be caused by an unexpected interaction between VMotion and an option to automatically upgrade VMware Tools. This is just speculation, of course, but the symptoms seem to fit.
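    If you’re seeing this behavior and want to test that theory, it might help to identify which VMs are configured to upgrade VMware Tools automatically. Here’s a quick-and-dirty sketch from the Service Console; the tools.upgrade.policy key and the upgradeAtPowerCycle value are my recollection of the VMX syntax, so double-check them before acting on the output:

        # Find VMX files that enable automatic VMware Tools upgrades;
        # temporarily disabling the option on a few VMs would help
        # isolate the variable.
        grep -il "upgradeAtPowerCycle" /vmfs/volumes/*/*/*.vmx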

    Have any other users out there experienced this problem? If so, what was the fix, if any? It sounds like there may be more to this issue than perhaps I first suspected.


    Via jtroyer on Twitter, I learned of this post comparing Hyper-V and VMware ESX.

    Now, I’ll be the first to admit that I’m a VMware fan, but, as others in the virtualization industry know, I also recognize that VMware is not a “one size fits all” solution. There are many places where other virtualization solutions, Hyper-V included, may be a better fit for the customer. It really all depends upon the customer’s needs.

    That being said, I do have a few questions for the owner of this particular post:

    • It’s a subtle point, but there is a distinction between “free” and “available at no additional charge”. I take vendors to task for this all the time. Hyper-V isn’t free; it’s available at no additional charge.
    • What in the world is “para metal” virtualization? I’ve heard of bare-metal virtualization (the kind that VMware ESX, Xen, and Hyper-V all perform) and paravirtualization (the kind that VMware ESX and Xen can perform; I don’t think Hyper-V does yet). Is “para metal” virtualization a blend of the two?
    • Identical servers are not required in order to support VMware HA. They are required for VMotion. I would strongly suspect that Hyper-V will have similar requirements or will require hardware support like AMD-V/Intel FlexMigration when its live migration feature arrives in 2010.
    • Just because VMware ESX can do memory overcommit doesn’t mean you have to use it. It just gives you the flexibility to use it when you need it.
    • I’m sorry, aren’t Microsoft Windows Server 2008, NTFS, and Windows Failover Clustering every bit as “proprietary” as VMware ESX, VMFS, and VMware HA? Am I missing something here?
    • VMware ESX installs just fine on x64 processors from both AMD and Intel. I have four x64 AMD servers sitting in my lab that are happily running both 32-bit and 64-bit guest operating systems.
    • Since when does the hypervisor layer not containing any drivers—i.e., having your I/O drivers reside in the parent partition—have anything to do with direct hardware access by the guest OS? Unless I’m mistaken, these two items have nothing to do with each other. And the jury is still out as to whether having your I/O drivers in the parent partition, an approach used by both Hyper-V and Xen, is really a better approach.

    Did I miss anything?

    UPDATE: VMware blogger Jason Boche has also responded. Good points, Jason!


    A problem has been identified with VMware ESX 3.5 Update 3 when using VMware HA and VM failure monitoring. This problem results from a delay in the transmission of a heartbeat from a VM to VMware HA; VMware HA then detects this as a VM failure and restarts the VM. It appears that this problem affects both VMware ESX and VMware ESXi.

    More information on the problem is available in this KB article.

    To fix the problem, users have two options:

    1. Disable virtual machine failure monitoring within the VMware HA cluster.
    2. Reconfigure the host to change the heartbeat delay.

    To reconfigure the host to change the heartbeat delay, follow the steps below (a condensed command-line sketch follows the list):

    1. Disconnect the host from VC (right-click on the host in the VI Client and select “Disconnect”).
    2. Log in to the VMware ESX server via SSH and obtain root permissions. Remember that best practices specify not to allow root SSH login, so you’ll need to log in as an ordinary user and then use “su -” to become root.
    3. Using a text editor such as nano or vi, edit the file “/etc/vmware/hostd/config.xml” and set the value of heartbeatDelayInSecs to 0, like this:
      <vmsvc>
      <heartbeatDelayInSecs>0</heartbeatDelayInSecs>
      </vmsvc>

    4. Restart the management agents on the VMware ESX server.
    5. Reconnect the host in VC (right-click on the host in the VI Client and select “Connect”).
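    For the command-line-inclined, steps 2 through 4 condense to something like the sketch below. The mgmt-vmware service is the usual way to restart the management agents on ESX 3.x; backing up config.xml first is my own suggestion, not part of the KB steps:

      su -                              # step 2: become root
      cp /etc/vmware/hostd/config.xml /etc/vmware/hostd/config.xml.bak
      vi /etc/vmware/hostd/config.xml   # step 3: set heartbeatDelayInSecs to 0
      service mgmt-vmware restart       # step 4: restart the management agents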

    No information is yet available on when this issue will be fixed.


    Despite the fact that I’m out of town this week at NetApp Insight, I wanted to go ahead and get out the latest installment of Virtualization Short Takes—my sometimes-weekly collection of interesting or useful links and tidbits.

    • Much ado has been made about VMware’s acquisition of Trango and the announcement of VMware MVP (Mobile Virtualization Platform). Rich Brambley has a great write-up, and I completely agree with Rich and Alex Barrett about what this really means: don’t expect to see Windows XP on your smartphone any time soon. Alex said it best: this is virtualization, not emulation, and Windows XP doesn’t run on ARM.
    • I’m curious—how many people agree with my comments in Alex’s article about the Citrix ICA client for the iPhone? Is there any real value in being able to access a Windows session from your iPhone? I tend to think not, but it would be an interesting discussion. Speak up in the comments.
    • Duncan points out that the issue with adding NICs to a team and keeping them all active—the workaround for which required editing esx.conf—has now been fixed in ESX 3.5 Update 3. It’s now possible to add NICs using esxcfg-vswitch with no need to edit esx.conf (there’s a brief sketch at the end of this list). Excellent!
    • If you haven’t yet checked out Leo’s Ramblings, go give it a look. He’s got some good content. It’s worth subscribing to the RSS feed (I did).
    • Rick provides a helpful tool for resolving common system management issues with VMware Infrastructure. Thanks, Rick!
    • Regular readers may recall that Chad Sakac of EMC and I had a round of VMware HA-related posts a few months ago (check out the VMwareHA tag for a full list of VMware HA-related posts). As part of that discussion there was lots of information provided about Service Console redundancy, failover NICs, secondary Service Console connections, additional isolation addresses…all sorts of good stuff. Duncan joined in the conversation as well with a number of great posts, and has been keeping it up since then. His latest contribution to the conversation is a comparison of using failover NICs vs. using a secondary Service Console to prevent VMware HA isolation response. According to the documentation, using a secondary Service Console can help reduce the wait time for VMware HA to step in should isolation actually occur. Good stuff, and definitely worth some additional exploration in the lab.
    • As a sort of follow-up to the discussion about using NFS for VMware, this VMware Communities thread has some great information on why the NFS timeouts should be increased in NetApp NFS environments. If you’re like me, you like to know the reasons behind the recommendations, and this thread was very helpful to me. Let me also add that we’ve recently started recommending that customers increase their Service Console memory to 800MB when using NFS, so that might be something to consider as well.
    • Need to change the path of where Update Manager stores its patches? Gabe shows you how here.
    • Eric Gray of VCritical explores the question: what would things be like without VMFS? Well, as he states, you can just ask a Hyper-V user, since Hyper-V doesn’t (yet) have a shared cluster filesystem. Yes, that will change in 2010 with Cluster Shared Volumes in Windows Server 2008 R2 and Hyper-V 2.0. I know. Or you can just add Melio FS from Sanbolic today and get the same thing. This is not anything new to me; I discussed this quite extensively here and here. Now, what would really be interesting is for VMware to work with Sanbolic to co-develop a more advanced version of VMFS that eliminates the SCSI reservation problems…
    • Need a nice summary of the various network communications that occur between different components of a VI3 implementation? Look no further than right here. Jason’s site is another one that’s worth adding to your RSS reader.
    • If you really like living on the edge, here’s a collection of some RPMs for VMware ESX 3.5. Keep in mind that installing third-party RPMs like this is not recommended or supported…
    • Andy Leonard picked up the DPM video by VMware and is looking forward to DPM no longer being experimental. What he’d really like, though, is some feature to move his VMs via Storage VMotion and spin down idle disks. Andy, I wouldn’t hold my breath.
    • If you are a Fusion user (if you own a Mac and need to run Windows, you should be a Fusion user!), this hint might come in handy.
    • Eric Siebert has a good post on traffic flow between VMs in various configuration scenarios—different vSwitches, same vSwitches, different port groups, same port groups, etc. Have a look if you are at all unclear how VMware ESX handles traffic flow.

    That does it for this round. Speak up in the comments if you have any interesting or useful links to share with other readers. I’d also be interested in readers’ thoughts on the whole Citrix on the iPhone discussion—will it really bring any usefulness?


    This week’s Short Take is a collection of links and articles that I’ve seen over the last few weeks (or longer ago, in some cases!) that I thought others might find interesting or useful. Enjoy!

    • Alessandro broke the news to the general public about some anticipated new virtualization features expected to make their debut in Windows Server 2008 R2, due sometime in 2010. Microsoft announced live migration for Hyper-V back at the beginning of September, so that part was already known. What’s new in Alessandro’s article is the announcement that Microsoft is developing a cluster file system, similar to VMFS, called Cluster Shared Volumes (CSV). Personally, this wasn’t a big surprise, as a contact of mine leaked it to me a while ago. Hopefully this won’t hit Sanbolic too hard, whose Melio FS and Kayo FS solutions were intended to fill this gap (as discussed here and here).
    • As fully expected, VMware and Microsoft trade lots of barbs back and forth about VMware ESX vs. Hyper-V and vice versa. Out of the various exchanges, I found the “Too Dry and Crunchy” exchange—now quite old, having been published back at the end of September—the most entertaining. It started here with a barb from VMware about how Hyper-V with Server Core, the recommended configuration from Microsoft for virtualization hosts, is “not the Windows you know.” They compared Hyper-V on Server Core to ESXi and, not surprisingly, found ESXi to be easier and faster to install. What was really surprising, though, was the response from James O’Neill, in which he essentially agreed: Server Core isn’t “the Windows you know.” While he does love Server Core, James also recognizes that Server Core is not the right fit for every workload, and that management processes and procedures may need to change when using Server Core. Personally, I’m glad to see James recognizing and being honest about the limitations (or caveats) of Server Core. If only all vendors were so honest about their own products…one day, perhaps.
    • Duncan points out a great PDF on the definitions of various memory statistics. Readers may find that useful in understanding the various counters within VirtualCenter.
    • This VMware KB article outlines a potential VMware HA problem with multiple Service Console interfaces.
    • Andy Leonard picked up this VMware KB article that I bookmarked via Delicious.com and discussed how VMware’s recommendations and NetApp’s recommendations seem to run counter to each other. Personally, I’m inclined to follow VMware’s recommendations after the little snafu with NetApp’s NFS file locking suggestion.
    • Here’s a cool article on the use of ZFS and iSCSI to create clones in storage instead of at the virtualization layer. It’s interesting because it’s being done with Solaris and ZFS, but it’s functionally equivalent to NetApp FlexClones, which I’ve discussed before (see here, here, and here). Accordingly, ZFS clones will suffer from all the same limitations as NetApp FlexClones.
    • And while we’re on the topic of Sun and NetApp, what’s the deal with the recent patent rulings in the ZFS vs. WAFL lawsuit? If I’m reading this update correctly, it looks like some of the core WAFL patents from NetApp are being invalidated. Is Sun going to win this thing?

    That does it for now. Thanks for reading!


    Virtualization Short Take #19 is here, with news, headlines, and commentary on a few things that have passed my way over the last few weeks. Feel free to share your own interesting tidbits in the comments!

    • Rick Blythe, aka VMwarewolf, regales us with a tale of a new “feature” in VirtualCenter 2.5 Update 2. This new feature prevents VM migrations when putting a VMware ESX server into maintenance mode if it will violate the VMware HA failover level. Users will need to disable VMware HA in order to restore the pre-Update 2 functionality. (Anyone know if this has been addressed in VirtualCenter 2.5 Update 3?)
    • This VMware KB article is quite interesting; note the excerpt under the “Purpose” section of the article:

      VMware recommends storing your swap on a VMFS3 volume, when running virtual machines on NFS storage.

      So, even when running your VMs on NFS, VMware still recommends running the VM swap files on a VMFS3 volume. Very interesting, indeed. This is particularly interesting to NetApp, who—some would say rightly so—heavily pushes NFS for VMware storage.

    • Also from VMwarewolf, a note about guest customization failing with VirtualCenter 2.5 Update 2. Again, anyone know if this has been addressed with Update 3?
    • Microsoft has released a hotfix for Hyper-V failover clustering that improves functionality and adds VM control. An article at Hypervoria initially alerted me to this hotfix; a full Microsoft KB article is also available.
    • Ben Armstrong aka Virtual PC Guy provides a couple of scripts—in VBScript and PowerShell—for creating a dynamic VHD file.
    • Again via Ben, it looks as if Symantec has added support for Hyper-V to Backup Exec 12.5. That’s interesting, because I asked about Hyper-V support back in June at Tech-Ed and was told “as market demands dictate”. Is market demand now dictating?
    • Duncan alerts us to a potential failure of Storage VMotion after changing the Service Console IP address. I personally haven’t seen this behavior, but the fix is handy to know just in case.
    • Looking for a way to make mass changes to some VMX files? Luckily for you, the VI Toolkit blog has some information that you might find useful (there’s also a rough Service Console alternative sketched at the end of this list).
    • Brian Madden has a good overview of “View Composer” as it was described a few weeks ago at VMworld. Toward the end of the article, Brian mentions that VMware hasn’t announced anything with regard to user profiles:

      While that sounds noble, it also is a bit at odds with the longer-term vision that VMware CEO Paul Maritz outlined in the VMworld keynote, namely, that VMware wants to focus on deploying a personality to a user, not to a device. Certainly View Composer goes a long way in centrally managing desktops, but I wouldn’t be surprised if VMware does more in the user personalization space in the future as well.

      As Warren Ponder points out in the comments, there are several ways to handle user profiles, and it does sound like VMware already has some irons in the fire to help address that particular concern.

    • This has been pointed out in numerous places around the web, but who can fault one more link? Mike DiPetrillo of VMware has re-created Hyper-V’s Quick Migration functionality using PowerShell. I could go somewhere with this, but I think I’ll just leave it alone.
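    Regarding the mass VMX changes item above: the VI Toolkit route in the linked post is the cleaner one, but if you’d rather work from the Service Console, a shell loop is the rough equivalent. A sketch only, with loud assumptions: the snapshot.action key and value are a made-up example, and the affected VMs should be powered off (and re-registered if necessary) for VMX edits to take effect:

        # Apply the same edit to every registered VMX file, keeping a
        # backup of each file before changing it. The key/value below is
        # purely illustrative -- substitute the setting you actually need.
        for vmx in /vmfs/volumes/*/*/*.vmx; do
            cp "$vmx" "$vmx.bak"
            sed -i 's/^snapshot\.action.*/snapshot.action = "keep"/' "$vmx"
        done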

    I guess that will do it for this time around. Again, feel free to share links or tidbits you found interesting in the comments below.

