Virtualization Short Take #37

It’s that time again: time for another Virtualization Short Take! My Virtualization Short Takes are quick glances at various news bytes, announcements, useful blog posts, or other items of interest. (By the way, the “short” in “Short Take” does not imply that my post is going to be short, in case anyone was wondering. I’m still long-winded, and I have a lot of things that I find interesting.)

  • Have I mentioned how useful the weekly VMware KB digests are?
  • Frank Denneman has published a couple of really great articles recently. The first discusses how to remove an orphaned Nexus 1000V distributed virtual switch; the second discusses a complex interaction between HP Continuous Access and LUN balancing scripts. Both articles are worth a read.
  • Similarly, Jeremy Waldrop has had a couple of good posts since he managed to get his hands on a Cisco UCS. The first post describes a “Doh!” moment when Jeremy realized that adding more vNICs to a VMware ESXi instance with the Cisco VIC (aka “Palo”; sorry, Cisco, you’re not going to escape that code name any time soon) is really just a matter of specifying them in the service profile. I can certainly see how that’s not immediately intuitive. The other article describes Jeremy’s experience with vNIC failover. There’s great information in the comments to that article; in particular, be sure not to enable vNIC failover with VMware vSphere. Bad things happen as a result. (OK, maybe not “bad things,” but network connectivity can be adversely affected. You should let VMware vSphere handle the NIC teaming and failover.)
  • Toni Westbrook has a good article on how to move the COS VMDK in VMware ESX 4.0. Key note: this solution is currently unsupported by VMware, so use at your own risk.
  • I’ve mentioned before how various bloggers often have a “masterpiece” post. This isn’t necessarily their most well-written post, but it’s the post that, for whatever reason, is a defining post for them. For me, it’s the ESX/VLANs/NIC teaming article I wrote in 2006. I think Jason Boche might have just come up with his: an in-depth discussion of the vpxd.cfg configuration options. Great information, Jason!
  • In VDI environments, storage capacity is only one aspect of the overall storage equation. Vijay Swami at vEverything takes a pretty balanced view of how two leading storage vendors—EMC and NetApp—address not only storage capacity, but also IOPS. It’s worth a read and again underscores that there is no “one right way” to do things. Different doesn’t necessarily mean better or worse, just different. It’s all about the technology choices. (Disclosure: I work for EMC Corporation.)
  • VDI on local disks, anyone? It’s an interesting discussion point that has its pros and cons. I guess the value of this sort of design really depends upon the business objectives the VDI implementation is trying to fulfill.
  • Is anyone else amused by the abrupt “about face” that Microsoft performed with Hyper-V’s dynamic memory feature? Wow…even I was caught off-guard by how quickly they went from one end of the spectrum to the other. I would rather hear someone say, “You know, we were wrong, and this is a valuable feature after all” than watch them just flip 180 degrees and start moving in a whole new direction.
  • Speaking of Microsoft and whole new directions…there was a great deal of coverage about Microsoft’s desktop virtualization announcement. I won’t try to delve into the details here; that’s a particular niche that is better served by those who have the time to dedicate to it. If you haven’t seen the news, my good friend Alessandro has a great write-up and there’s the official press release from Microsoft.
  • If you’re interested in getting more information on RemoteFX—which appears, more than anything else, to simply be a set of LAN-only acceleration features for RDP and not an entirely new protocol—this article has good information. You might also have a look at this post about Service Pack 1 for Windows Server 2008 R2, which will enable both RemoteFX as well as the aforementioned Dynamic Memory.
  • Continuing along with my little BSD love-fest, I came across this article that describes some strange behavior with CARP that can only be fixed by using link aggregation. The geek in me wants to go test this in a bunch of different scenarios to see if the Nexus 1000V fixes it or something like that, but I doubt that I’ll have the time.
  • This is old news now, but in case you hadn’t heard, VMware is licensing technology from Likewise Software for use with the next version of VMware vSphere. This will tighten vSphere’s integration with Active Directory. This is generally good, except that it will render my articles on ESX integration with Active Directory useless.
  • With VMware vSphere 4.0 Update 1, you can now install EMC PowerPath/VE using vCenter Update Manager. This VMware KB article provides the details on how it’s done.
  • If you’re using ESXi and want to direct logging data elsewhere via syslog, this VMware KB article describes how to configure syslog in ESXi.
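As a rough sketch of what that configuration looks like from the vSphere CLI (hostnames and credentials below are placeholders, and the exact options may vary by vCLI version), you can point an ESXi host at a remote syslog server with `vicfg-syslog`:

```shell
# Point an ESXi 4.x host at a remote syslog server using the vSphere CLI.
# esxi01.example.com and loghost.example.com are placeholder names.
vicfg-syslog --server esxi01.example.com --username root \
    --setserver loghost.example.com --setport 514

# Verify the current syslog configuration on the host
vicfg-syslog --server esxi01.example.com --username root --show
```

The same setting can also be changed in the vSphere Client via the host’s Advanced Settings (the `Syslog.Remote.Hostname` and `Syslog.Remote.Port` options); see the KB article for the supported procedure.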
  • The ages-old discussion of scale up vs. scale out is revisited again in this blog post. I guess the key takeaway for me is the reminder that while VMware HA does restart workloads automatically, there’s still an outage. If you’re running 50 VMs on a host, you’re still going to have an outage across as many as 50 different applications within your organization. That’s not a trivial event. I think a lot of people gloss over that detail. VMware HA helps, but it’s not the ultimate solution to downtime that people sometimes portray it as.
  • PHD Virtual has released esXpress version 4.0 today. I’ve taken a step back from most product announcements simply because they come too quickly to really keep up with them (unless you’re a madman like David Marshall over at VMBlog.com—my hat’s off to you, David!), but the timing worked out for this one. Go have a look at PHD Virtual’s web site for all the details.
  • Last, but most certainly not least, my esteemed colleague Mike Laverick has completed his updated VMware SRM book, now updated for VMware SRM 4.0. Great work, Mike! I would wish you all financial success with the book, but as you’re giving it away for free (an admirable step, by the way) I guess I’ll just have to wish you all other forms of success!

That does it for me this time around, folks. Thanks for reading (I appreciate it!), and if you have some good information to add please feel free to speak up in the comments.


  1. Jason Boche

    re: Microsoft’s “about face”: they’ve effectively confused some of their customers with their previous stance, which stated that memory sharing, overcommit, or whatever voodoo you want to call it is “dangerous”.

  2. Duncan

    I would also recommend checking my article on Scale Up vs Scale Out, as there was a lively discussion in the comments section:
    http://www.yellow-bricks.com/2010/03/17/scale-up/

  3. Nate

    Scott and Jason, my understanding of Hyper-V’s upcoming Dynamic Memory feature is that it takes a different approach than what is currently used by VMware, and is far from an about face on the topic. I believe the stance from Microsoft would still be that how VMware chooses to do it is “dangerous” or at least not recommended. Essentially the difference is that VMware tells all of the VMs they have more memory available to them than the system can really hand out. As the VMs actually try to use the memory they’ve been told they have and overload the host’s actual memory, badness can happen. I don’t think VMware would argue that; they’d tell you to plan properly so you don’t get in that situation.

    The Microsoft solution, as I understand it, tells the VM it has a certain amount of memory. The host has to have enough memory to meet all of the minimums you’ve set. Then you can have an extra pool of memory available on the host and make policies to dynamically add from that pool to the VM. It requires the VM OS to support hot memory add. The host is never going to assign more memory than it actually has, so you are never in that “dangerous” land that VMware puts you in. The idea is not to do an overcommit and somehow try to magically pretend like you have more memory than you really have. The idea is to have a reserve pool that can be dynamically added to a VM to boost its performance for a time window where it needs it.

  4. Joe Lawson

    Thanks for the roundup. Always interesting to see what others are thinking about.

    One point to make: my company has been using esXpress for over two years now on one host. Writing multiple VCB scripts, and all the space required to proxy and store the information, has had us rethinking our approach. Not having dedupe technology on our NS-20, we have tested out VDR as well as the other two top backup competitors (vRanger, Veeam). It seems like PHD really has the right paradigm here: everything is backed up automatically, and you have to explicitly exclude anything you don’t want. With all the other products, you have to create jobs and program it all in their GUI. With esXpress, all the machines are getting backed up. If we need to optimize, we just put some commands in the VM’s notes and they are interpreted correctly. I sound like a cheerleader here, but I wanted to point out how great a product they have. It just works. Perhaps a full review soon.
