
Welcome to Technology Short Take #23, another collection of links and thoughts related to data center technologies like networking, storage, security, cloud computing, and virtualization. As usual, we have a fairly wide-ranging collection of items this time around. Enjoy!

Networking

  • A couple of days ago I learned that there are two open source implementations of LISP (Locator/ID Separation Protocol): OpenLISP, which runs on FreeBSD, and a project called LISPmob that brings LISP to Linux. From what I can tell, LISPmob appears to be a bit more focused on the endpoint than OpenLISP.
  • In an earlier post on STT, I mentioned that STT’s re-use of the TCP header structure could cause problems with intermediate devices. It looks like someone has figured out how to allow STT through a Cisco ASA firewall; the configuration is here.
  • Jose Barreto posted a nice breakdown of SMB Multichannel, a bandwidth-enhancing feature of SMB 3.0 that will be included in Windows Server 2012. It is, not unexpectedly, only supported between two SMB 3.0-capable endpoints (which, at this time, means two Windows Server 2012 hosts). Hopefully additional vendors will adopt SMB 3.0 as a network storage protocol. Just don’t call it CIFS!
  • Reading this article, you might deduce that Ivan really likes overlay/tunneling protocols. I am, of course, far from a networking expert, but I do have to ask: at what point does it become necessary (if ever) to move some of the intelligence “deeper” into the stack? Networking experts everywhere advocate the “complex edge-simple core” design, but does it ever make sense to move certain parts of the edge’s complexity into the core? Do we hamper innovation by insisting that the core always remain simple? As I said, I’m not an expert, so perhaps these are stupid questions.
  • Massimo Re Ferre posted a good article on a typical VXLAN use case. Read this if you’re looking for a more concrete example of how VXLAN could be used in a typical enterprise data center.
  • Bruce Davie of Nicira helps explain the difference between VPNs and network virtualization; this is a nice companion article to his colleague’s post (which Bruce helped to author) on the difference between network virtualization and software-defined networking (SDN).
  • The folks at Nicira also collaborated on this post regarding software overhead of tunneling. The results clearly favor STT (which was designed to take advantage of NIC offloading) over GRE, but the authors do admit that as “GRE awareness” is added to the cards, that protocol’s performance will improve.
  • Oh, and while we’re on the topic of SDN…you might have noticed that VMware has taken to using the term “software-defined” to describe many of the services that vSphere (and related products) provide. This includes the use of software-defined networking (SDN) to describe the functionality of vSwitches, distributed vSwitches, vShield, and other features. Personally, I think that the term software-based networking (SBN) is far more applicable than SDN to what VMware does. Is it just me?
  • Brad Hedlund wrote this post a few months ago, but I’m just now getting around to commenting about it. The gist of the article—forgive me if I munge it too much, Brad—is that the use of open source software components might dramatically change the shape/way/means in which networking protocols and standards are created and utilized. If two components are communicating over the network via open source components, is some sort of networking standard needed to avoid being “proprietary”? It’s an interesting thought, and goes to show the influence of open source on the IT industry. Great post, Brad.
  • One more mention of OpenFlow/SDN: it’s great technology (and I’m excited about the possibilities that it creates), but it’s not a silver bullet for scalability.

Security

  • I came across this post describing a security attack based on VMDKs. It’s quite an interesting read, even if the probability of actually being able to leverage this attack vector is fairly low (as I understand it).

Storage

  • Chris Wahl has a good series on NFS with VMware vSphere. You can catch the start of the series here. One comment on the testing he performs in the “Same Subnet” article: if I’m not mistaken, I believe the VMkernel selection is based upon which VMkernel interface is listed in the first routing table entry for the subnet. This is something about which I wrote back in 2008, but I’m glad to see Chris bringing it to light again.
  • George Crump published this article on using DCB to enhance iSCSI. (Note: The article is quite favorable to Dell, and George discloses an affiliation with Dell at the end of the article.) One thing I did want to point out is that—if I recall correctly—the 802.1Qbb standard for Priority Flow Control only defines a single “no drop” class of service (CoS). Normally that CoS is assigned to FCoE traffic, but in an environment without FCoE you could assign it to iSCSI. In an environment with both, that could be a potential problem, as I see it. Feel free to correct me in the comments if my understanding is incorrect.
  • Microsoft is introducing data deduplication in Windows Server 2012, and here is a good post providing an introduction to Microsoft’s deduplication implementation.
  • SANRAD VXL looks interesting—anyone have any experience with it? Or more detailed technical information?
  • I really enjoyed Scott Drummonds’ recent storage performance analysis post. He goes pretty deep into some storage concepts and provides real-world, relevant information and recommendations. Good stuff.
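On the VMkernel selection behavior mentioned in the NFS item above, a minimal sketch of first-match route lookup can make the idea concrete. This is a toy Python model, not ESX’s actual code, and the interface names and subnets are made up:

```python
import ipaddress

# Hypothetical host routing table, in the order entries are listed:
# (destination network, VMkernel interface). Two VMkernel interfaces
# sit on the same subnet; only the first matching entry gets used.
routing_table = [
    (ipaddress.ip_network("192.168.50.0/24"), "vmk1"),
    (ipaddress.ip_network("192.168.50.0/24"), "vmk2"),  # never selected
    (ipaddress.ip_network("0.0.0.0/0"), "vmk0"),        # default route
]

def select_vmkernel(dest_ip):
    """Return the interface of the first routing-table entry that matches."""
    dest = ipaddress.ip_address(dest_ip)
    for network, vmk in routing_table:
        if dest in network:
            return vmk
    return None

print(select_vmkernel("192.168.50.10"))  # vmk1 (all NFS traffic uses it)
print(select_vmkernel("10.0.0.5"))       # vmk0 (via the default route)
```

In other words, adding a second VMkernel port on the same subnet doesn’t buy you load balancing by itself; the first entry wins.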
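And on the Priority Flow Control point in the DCB/iSCSI item above: assuming (as described there) that only a single “no drop” class is available, a small toy model shows why FCoE and iSCSI end up competing for it. All CoS values and assignments here are purely illustrative:

```python
# Eight 802.1p CoS values (0-7); in this toy DCB configuration only one
# of them is treated as lossless ("no drop") under 802.1Qbb PFC.
NO_DROP_COS = 3

# Illustrative traffic-type-to-CoS assignments.
traffic_classes = {"fcoe": 3, "iscsi": 0, "vm_traffic": 0}

def lossless_traffic():
    """Return the traffic types currently mapped to the no-drop class."""
    return sorted(t for t, cos in traffic_classes.items() if cos == NO_DROP_COS)

print(lossless_traffic())  # ['fcoe'] (the usual arrangement)

# Without FCoE you could hand the no-drop class to iSCSI instead; with
# both present, they are forced to share the single lossless class.
traffic_classes["iscsi"] = NO_DROP_COS
print(lossless_traffic())  # ['fcoe', 'iscsi'] (the potential conflict)
```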

Cloud Computing/Cloud Management

  • After moving CloudStack to the Apache Software Foundation, Citrix published this discourse on “open washing” and provides a set of questions to determine the “openness” of software projects with which you may become involved. While the article is clearly structured to favor Citrix and CloudStack, the underlying point—to understand exactly what “open source” means to your vendors—is valid and worth consideration.
  • Per the AWS blog, you can now export EC2 instances out of Amazon and into another environment, including VMware, Hyper-V, and Xen environments. I guess this kind of puts a dent in the whole “Hotel California” marketing play that some vendors have been using to describe Amazon.
  • Unless you’ve been hiding under a rock for the past few weeks, you’ve most likely heard about Nick Weaver’s Razor project. (If you haven’t heard about it, here’s Nick’s blog post on it.) To help with the adoption/use of Razor, Nick also recently announced an overview of the Razor API.

Virtualization

  • Frank Denneman continues to do a great job writing solid technical articles. The latest article to catch my eye (and I’m sure that I missed some) was this post on combining affinity rule types.
  • This is an interesting post on a vSphere 5 networking bug affecting iSCSI that was fixed in vSphere 5.0 Update 1.
  • Make a note of this VMware KB article regarding UDP traffic on Linux guests using VMXNET3; the workaround today is using E1000 instead.
  • This post is actually over a year old, but I just came across it: Luc Dekens posted a PowerCLI script that allows a user to find the maximum IOPS values over the last 5 minutes for a number of VMs. That’s handy. (BTW, I have fixed the error that kept me from seeing the post when it was first published—I’ve now subscribed to Luc’s blog.)
  • Want to use a Debian server to provide NFS for your VMware environment? Here is some information that might prove helpful.
  • Jeremy Waldrop of Varrow provides some information on creating a custom installation ISO for ESXi 5, Nexus 1000V, and PowerPath/VE. Cool!
  • Cormac Hogan continues to pump out some very useful storage-focused articles on the official VMware vSphere blog. For example, both the VMFS locking article and the article on extending an EagerZeroedThick disk were great posts. I sincerely hope that Cormac keeps up the great work.
  • Thanks to this Project Kronos page, I’ve been able to successfully set up XCP on Ubuntu Server 12.04 LTS. Here’s hoping it gets easier in future releases.
  • Chris Colotti takes on some vCloud Director “challenges”, mostly surrounding vShield Edge and vCloud Director’s reliance on vShield Edge for specific networking configurations. While I do agree with many of Chris’ points, I personally would disagree that using vSphere HA to protect vShield Edge is an acceptable configuration. I was also unable to find any articles that describe how to use vSphere FT to protect the deployed vShield appliances. Can anyone point out one or more of those articles? (Put them in the comments.)
  • Want to use Puppet to automate the deployment of vCenter Server? See here.
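To give a rough sense of what Luc’s script computes, here’s a small Python analogue using made-up sample data (the real script uses PowerCLI and vCenter performance statistics, not the names below):

```python
# Made-up per-VM IOPS samples covering the last 5 minutes
# (e.g., 20-second intervals, truncated here for brevity).
samples = {
    "vm-web01": [120, 340, 290, 515, 410],
    "vm-db01":  [980, 1250, 1100, 1420, 1315],
    "vm-app01": [45, 60, 52, 71, 66],
}

# Peak IOPS per VM, sorted from busiest to quietest.
peaks = sorted(((max(vals), vm) for vm, vals in samples.items()), reverse=True)
for peak, vm in peaks:
    print(f"{vm}: max {peak} IOPS")
```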

I guess it’s time to wrap up now, lest my “short take” get even longer than it already is! Thanks for reading this far, and I hope that I’ve shared something useful with you. Feel free to speak up in the comments if you have questions, thoughts, or clarifications.


Welcome to Technology Short Take #17, another of my irregularly-scheduled collections of various data center technology-related links, thoughts, and comments. Here’s hoping you find something useful!

Networking

  • I think it was J Metz of Cisco who posted this to Twitter, but this is a good reference to the various 10 Gigabit Ethernet modules.
  • I’ve spoken quite a bit about stretched clusters and their potential benefits. For an opposing view—especially regarding the use of stretched clusters as a disaster avoidance solution—check out this article. It’s a nice counterpoint, especially from the perspective of the network.
  • Anyone know anything about sFlow?
  • Here’s a good post on VXLAN that has some useful information. I’d just like to point out that VXLAN is really only intended to address Layer 2 communications “within” a vApp or a collection of VMs (perhaps a single organization’s VMs), and doesn’t do anything to address Layer 3 routing/accessibility for clients (or “consumers”) attempting to connect to those systems. For that, you’ll still need—at least today—technologies like OTV, LISP, and others.
  • A quick thought that I’m still exploring: what’s the impact of OpenFlow on technologies like VXLAN, NVGRE, and others? Does SDN eliminate the need for these technologies? I’d be curious to hear your thoughts.

Servers/Operating Systems

  • If you’ve adopted Mac OS X Lion 10.7, you might have noticed some problems connecting to older servers/NAS devices running AFP (Apple Filing Protocol). This Apple KB article describes a fix. Although I’m running Snow Leopard now, I was running Lion on a new MacBook Pro and I can attest that this fix does work.
  • This Microsoft KB article describes how to extend the Windows Server 2008 evaluation period. I’ve found this useful for Windows Server 2008 instances in the lab that I need for longer than 60 days but that I don’t necessarily want to activate (because they are transient).

Storage

  • Jason Boche blogged about a way to remove stubborn hosts from Unisphere. I’ve personally never seen this problem, but it’s nice to know how to address it should it occur.
  • Who would’ve thought that an HDD could serve as a cache for an SSD? Shouldn’t it be the other way around? Normally, that would probably be the case, but as described here there are certain instances and ways in which using an HDD as a cache for an SSD can improve performance.
  • Scott Drummonds wraps up his three-part series on flash storage with part 3, which contains information on sizing flash storage. If you haven’t been reading this series, I’d recommend giving it a look.
  • Scott also weighs in on the flash as SSD vs. flash on PCIe discussion. I’d have to agree that interfaces are important, and the ability of the industry to successfully leverage flash on the PCIe bus is (today) fairly limited.
  • Henri updated his VNXe blog series with a new post on EFD and RR performance. No real surprises here, although I do have one question for Henri: is that your car in the blog header?

Virtualization

  • Interested in setting up host-only networking on VMware Fusion 4? Here’s a quick guide.
  • Kenneth Bell offers up some quick guidelines on when to deploy MCS versus PVS in a XenDesktop environment. MCS vs. PVS is a topic of some discussion on the vSpecialist mailing list, as the two have very different IOPS requirements and I/O profiles.
  • Speaking of VDI, Andre Leibovici has two articles that I wanted to point out. First, Andre does a deep dive on Video RAM in VMware View 5 with 3D; this has tons of good information that is useful for a VDI architect. (The note about the extra .VSWP overhead, for example, is priceless.) Andre also has a good piece on VDI and Microsoft Outlook that’s worth reading, laying out the various options for Outlook-related storage. If you want to be good at VDI, Andre is definitely a great resource to follow.
  • Running Linux in your VMware vSphere environment? If you haven’t already, check out Bob Plankers’ Linux Virtual Machine Tuning Guide for some useful tips on tuning Linux in a VM.
  • Seen this page?
  • You’ve probably already heard about Nick Weaver’s new “Uber” tool, a new VM alignment tool called UBERAlign. This tool is designed to address VM alignment, a problem with how guest file systems are formatted within a VMDK. For more information, see Nick’s announcement here.
  • Don’t disable DRS when you’re using vCloud Director. It’s as simple as that. (If you want to know why, read Chris Colotti’s post.)
  • Here are a couple of great diagrams by Hany Michael on vCloud Director management pods (both public cloud and private cloud management).
  • People automatically assume that “virtualization” means consolidating multiple workloads onto a single physical server. However, virtualization is really just a layer of abstraction, and that layer of abstraction can be used in a variety of ways. I spoke about this in early 2010. This article (written back in March of 2011) by Brad Hedlund picks up on that theme to show another way that virtualization—or, as he calls it, “inverse virtualization”—can be applied to today’s data centers and today’s applications.
  • My discussion on the end of the infrastructure engineer generated some conversations, which is good. One of the responses was by Aaron Sweemer in which he discusses the new (but not new) “data layer” and expresses a need for infrastructure engineers to be aware of this data layer. I’d agree with a general need for all infrastructure engineers to be aware of the layers above them in the stack; I’m just not convinced that we all need to become application developers.
  • Here’s a great post by William Lam on the missing piece to creating your own vSEL cloud. I’ll tell you, William blogs some of the coolest stuff…I wish I could dig in as deep as he does in some of this stuff.
  • Here’s a nice look at the use of PowerCLI to help with the automation of DRS rules.
  • One of my projects for the upcoming year is becoming more knowledgeable and conversant with the open source Xen hypervisor and Citrix XenServer. I think that the XenServer Design Handbook is going to be a useful resource for that project.
  • Interested in more information on deploying Oracle databases on vSphere? Michael Webster, aka @vcdxnz001 on Twitter, has a lengthy article with lots of information regarding Oracle on vSphere.
  • This VMware KB article describes how to enable centralized logging for vCloud Director cells. This is particularly important for HA environments, where VMware’s recommended HA strategy involves the use of multiple vCD cells.

I guess I should wrap it up here, before this post gets any longer. Thanks for reading this far, and feel free to speak up in the comments!


Welcome to Technology Short Take #9, the last Technology Short Take for 2010. In this Short Take, I have a collection of links and articles about networking, servers, storage, and virtualization. Of note this time around: some great DCI links, multi-hop FCoE finally arrives (sort of), a few XenServer/XenDesktop/XenApp links, and NTFS defragmentation in the virtualized data center. Here you go—enjoy!

Networking

  • Brad Hedlund has a great post discussing Nexus 7000 connectivity options for Cisco UCS. I’ll include it in this section since it focuses more on the networking aspect rather than UCS. I haven’t had the time to read the full PDF linked in Brad’s article, but the other topics he discusses in the post—FabricPath networks, F1 vs. M1 linecards, and FCoE connectivity—are great discussions. I’m confident the PDF is equally informative and useful.
  • This UCS-specific post describes how northbound Ethernet frame flows work. Very useful information, especially if you are new to Cisco UCS.
  • Data Center Interconnect (DCI) is a hot topic these days considering that it is a key component of long-distance vMotion (aka vMotion at distance). Ron Fuller (who I had the pleasure of meeting in person a few weeks ago, great guy), aka @ccie5851 on Twitter and one of the authors of NX-OS and Cisco Nexus Switching: Next-Generation Data Center Architectures (available from Amazon), wrote a series on the various available DCI options such as EoMPLS, VPLS, A-VPLS, and OTV. If you’re considering DCI—especially if you’re a non-networking guy and need to understand the impact of DCI on the networking team—this series of articles is worth reading. Part 1 is here and part 2 is here.
  • And while we are discussing DCI, here’s a brief post by Ivan Pepelnjak about DCI encryption.
  • This post was a bit deep for me (I’m still getting up to speed on the more advanced networking topics), but it seemed interesting nevertheless. It’s a how-to on redistributing routes between VRFs.
  • Optical or twinax? That’s the question discussed by Erik Smith in this post.
  • Greg Ferro also discusses cabling in this post on cabling for 40 Gigabit and 100 Gigabit Ethernet.

Servers/Operating Systems

  • As you probably already know, Cisco released version 1.4 of the UCS firmware. This version incorporates a number of significant new features: support for direct-connected storage, support for incorporating C-Series rack-mount servers into UCS Manager (via a Nexus 2000 series fabric extender connected to the UCS 61x0 fabric interconnects), and more. Jeremy Waldrop has a brief write-up that lists a few of his favorite new features.
  • This next post might only be of interest to partners and resellers, but having been in that space before joining EMC I fully understand the usefulness of having a list of references and case studies. In this case, it’s a list of case studies and references for Cisco UCS, courtesy of M. Sean McGee (who I hope to meet in person in St. Louis in just a couple of weeks).

Virtualization

  • Using XenServer and need to support multicast? Look to this article for the information on how to enable multicast with XenServer.
  • A couple of colleagues over at Intel (I worked with Brian on one of his earlier white papers) forwarded me the link to their latest Ethernet virtualization white paper, which discusses the use of 10 Gigabit Ethernet with VMware vSphere. You can find the link to the latest paper in this blog entry.
  • Bhumik Patel has a good write-up on the “behind-the-scenes” technical details that went into the Cisco-Citrix design guides around XenDesktop/XenApp on Cisco UCS. Bhumik provides the details on things like how many blades were used in the testing, what the configuration of the blades was, and what sort of testing was performed.
  • Thinking of carving your storage up into guest OS datastores for VMware? You might want to read this first for some additional considerations.
  • I know that this has seen some traffic already, but I did want to point out Eric Sloof’s post on the Xenoss XenPack for ESXTOP. I haven’t had the opportunity to use it yet, but would certainly love to hear from anyone who has. Feel free to share your experiences in the comments.
  • As is usually the case, Duncan Epping has had some great posts over the last few weeks. His post on shares set on resource pools highlights the need to adjust the shares value (and other resource constraints) based on the contents of the pool, something that many people forget to do. He also provides a breakdown of the various vCenter memory statistics, and discusses an issue with binding a Provider vDC directly to an ESX/ESXi host.
  • PowerCLI 4.1.1 has some improvements for VMware HA clusters which are detailed in this VMware vSphere PowerCLI Blog entry.
  • Frank Denneman has three articles which have caught my attention over the last few weeks. (All his stuff is good, by the way.) First is his two-part series on the impact of oversized virtual machines (part 1 and part 2). Some of the impacts Frank discusses include memory overhead, NUMA architectures, shares values, HA slot size, and DRS initial placement. Apparently a part 3 is planned but hasn’t been published yet (see some of the comments in part 2). Also worth a read is Frank’s recent post on node interleaving.
  • Here’s yet another tool in your toolkit to help with the transition to ESXi: a post by Gabe on setting logfile location, swap file, SNMP, and vmkcore partition in ESXi.
  • Here’s another guide to creating a bootable ESXi USB stick (on Windows). Here’s my guide to doing it on Mac OS X.
  • Jon Owings had an idea about dynamic cluster pooling. This is a pretty cool idea—perhaps we can get VMware to include it in the next major release of vSphere?
  • Irritated that VMware disabled copy-and-paste between the VM and the vSphere Client in vSphere 4.1? Fix it with these instructions.
  • This white paper on configuration examples and troubleshooting for VMDirectPath was recently released by VMware. I haven’t had the chance to read it yet, but it’s on my “to read” list. I’ll just have a look at that in my copious free time…
  • David Marshall has posted a two-part series on how NTFS causes I/O bottlenecks on virtual machines (part 1 and part 2). It’s a great review of NTFS and how Microsoft’s file system works. Ultimately, the author of the posts (Robert Nolan) sets the readers up for the need for NTFS defragmentation in order to reduce the I/O load on virtualized infrastructures. While I do agree with Mr. Nolan’s findings in that regard, there are other considerations that you’ll also want to include. What impact will defragmentation have on your storage array? For example, I think that NetApp doesn’t recommend using defragmentation in conjunction with their storage arrays (I could be wrong; can anyone confirm?). So, I guess my advice would be to do your homework, see how defragmentation is going to affect the rest of your environment, and then proceed from there.
  • Microsoft thinks that App-V should be the most important tool in your virtualization tool belt. Do you agree or disagree?
  • William Lam has instructions for how to identify the origin of a vSphere login. This might not be something you need to do on a regular basis, but when you do need to do it, you’ll be thankful you have these instructions.

I guess it’s time to wrap up now, since I have likely overwhelmed you with a panoply of data center-related tidbits. As always, I encourage your feedback, so please feel free to speak up in the comments. Thanks for reading!


It has been a busy week since I last posted, and the blogging/micro-blogging world has been quite busy. I’ve gathered quite the collection of links and posts over the last week or so; here are a few that caught my eye. Welcome to Virtualization Short Take #41!

  • About a month ago Rick Vanover posted a quick note about the use of Disk.SchedNumReqOutstanding as a potential performance tweak. As Rick mentions at the end of his article, it’s important to “test before and after with an intensive workload”; otherwise, you could find yourself actually hurting performance. Like so many performance tweaks, it really depends upon your specific environment and your specific workloads. Definitely refer to some of the linked resources at the end of Rick’s article (Duncan’s stuff is always helpful) for more details.
  • Speaking of Duncan, his post on vCPU limits is a great read and helps dispel a common misconception about vCPU limits. Remember the definition of a hertz, folks—300 million cycles per second (300MHz) means 300 million cycles per second. It doesn’t mean 500 million cycles per second, or 700 million cycles per second.
  • Frank Denneman’s article on memory reservations and resource pools is also a really good read.
  • Kenneth van Ditmarsch has a good post on using datastore permissions to help ensure that VMs are properly placed based on SLAs. This is the kind of operational advice that I think many organizations still need.
  • Continuing our theme of resource allocation, here’s a good post on the effect of shares.
  • If you’re interested in an early look at some of the features targeted for inclusion in VMware View 4.5, have a look at Matthijs Haverink’s post on View 4.5 expected features. If Matthijs’ information is accurate, it looks like VMware has some good stuff planned.
  • I had a URL in my Yojimbo collection for part 5 of the series on Hyper-V dynamic memory, but it doesn’t seem to work anymore. I think the blog post was pulled. If anyone has a working link (yes, I’ve already checked Google), feel free to post it in the comments.
  • Jeremy Waldrop of Varrow brings to light a potential issue with the Cisco “Palo” adapter (now called the Virtual Interface Controller, or VIC) and PowerPath/VE. There is a workaround that fixes the problem. It’s important to note that the Cisco VIC isn’t fully vetted or validated for Vblock yet; that’s still in progress.
  • As a follow up from my mention of this issue in VST #40, I have more information on the Changed Block Tracking (CBT) issue. This post from VMware has more information on the specific conditions needed to produce the problem. I have to say, it looks like a pretty specific set of circumstances. I’m curious to know your thoughts: is this a corner case, or a really significant problem? Personally, I’m leaning toward the former.
  • EMC virtual appliances are really taking off; Chad unwrapped the FMA virtual appliance and fellow vSpecialist team member Nick Weaver unveiled v2 of the “Uber” Celerra VSA as well. I haven’t had the chance to play with the FMA virtual appliance yet, but I’m traveling tonight so maybe I’ll mess around with it on my laptop tonight from the hotel. (Yes, I’m a geek. What can I say?)
  • Following Citrix’s announcement of XenClient, their bare metal client hypervisor, and VMware’s response that perhaps the bare metal client hypervisor’s use cases are more limited than many might think, Citrix has responded by explaining XenClient to VMware. Bare metal hypervisors, unmanaged type 2 hypervisors, and policy-managed type 2 hypervisors all have value in the desktop virtualization space. Perhaps VMware should write a response to Citrix explaining the idea behind check-out/check-in of policy-controlled VMs? While I’m sure that I won’t be very popular with VMware for saying this, I do have to agree with Citrix here: discounting the value of bare metal client hypervisors on the basis of a single use case is a bit disingenuous, especially when you’ve been promoting client hypervisors for a while.
  • Looking to stay sharp and stay relevant in today’s changing IT landscape? Mike DiPetrillo offers some suggestions for skills that IT folks should embrace.
  • Kevin Goodman shared some information here on consolidation ratios with his Cisco UCS environment. He admits he is constrained by RAM, which is common in many data centers today. There are two answers to that problem today: full-width UCS blades with support for massive amounts of RAM; or expensive, high-capacity RAM modules to drive memory capacity higher. It also looks like the Nehalem EX chipset is going to help address that problem with support for more memory buses and more memory slots. Once again I find it interesting that virtualization is helping to drive hardware development.
  • Forbes Guthrie has published v5 of his connections and ports diagram for VMware ESX/ESXi. Definitely a useful resource!
  • This VMware KB article helps clarify the behavior of TPS with Intel Xeon 5500 (Nehalem)-based systems. This isn’t new information (I believe Duncan might have pointed it out first?), but it’s nice to see clarification of the behavior.
  • OK, I’m probably showing my ignorance here (I haven’t had the opportunity to spend as much time with View Manager as I would like), but who knew View Manager had a command-line tool?
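Back on Duncan’s vCPU-limit point: the arithmetic is worth making concrete. A quick sketch (the 2.4 GHz host clock below is just an assumed example):

```python
# A 300 MHz limit grants the vCPU 300 million cycles per second, period.
limit_hz = 300_000_000
host_core_hz = 2_400_000_000  # assumed 2.4 GHz physical core

# The fraction of one physical core's time the VM can consume:
fraction = limit_hz / host_core_hz
print(f"{fraction:.3f} of one core")                       # 0.125
print(f"{fraction * 1000:.0f} ms of CPU time per second")  # 125 ms
```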

I guess that will wrap things up for this issue of Virtualization Short Takes. I hope you’ve found something useful!


Yesterday, Alex Barrett of TechTarget posted a tweet about her predictions for 2010:

No wishy washy 2010 predictions from me: VMware will cut prices, and Citrix will give up on XenServer:

The link in the tweet corresponds to this TechTarget article in which Alex predicts that VMware will cut prices and Citrix will dump XenServer and focus instead on its management products like Citrix Essentials. Citrix Essentials, as you probably know, already supports Microsoft Hyper-V. Alex’s prediction is not an unusual one; others have made this prediction before. Quite honestly, based on the progress we are seeing on XenServer’s development, I can see the logic behind Alex’s prediction.

Then along comes Simon Crosby and posts a rebuttal to Alex’s prediction, citing XenServer’s growth, industry partnerships, and projected development goals. OK, that’s fine and all; I would certainly expect Simon to be an ardent supporter of Xen and XenServer. I don’t take issue with his rebuttal; what I take issue with is this statement:

I think I’ve concluded that there are a few people whose predictions about the future I will never believe. They are precisely those who are compensated based on clicks and not insight, and who seldom take the time to check for data or accuracy.

To prevent any question of the individual about whom he was speaking, Simon added a hyperlink (recreated in the quote above) to point to Alex Barrett’s author page at TechTarget.

Ouch—that’s a bit harsh, don’t you think? It’s just bad form to say something like that about someone. First of all, a prediction isn’t exactly something you can “check for data or accuracy”; it’s a prediction. No one, including me, begrudges any vendor for defending itself. But there are ways of defending yourself without personally attacking others. There are ways to disagree respectfully and courteously. There are those out there that might want to try this approach.


I was reading a completely unrelated post on Alessandro’s site this morning about how VKernel is reacting to VMware’s release of CapacityIQ when a thought occurred to me: is VMware legitimizing the competition?

Here’s the excerpt from Alessandro’s post that started me thinking:

And of course VKernel now is also in hurry to clarify that support for Microsoft Hyper-V and Citrix XenServer is coming.

Now, let me ask you this question: what is one of the largest complaints about products like Microsoft Hyper-V and Citrix XenServer? It’s the size of the partner ecosystem. Customers are a bit more hesitant to deploy these other solutions in part because there aren’t as many partner solutions out there to complement the virtualization solutions.

So, as VMware expands into new markets like capacity management and monitoring, backups, etc., former VMware-only partners are forced to adapt their products to work with Hyper-V and XenServer in order to protect themselves. This causes the size of the partner ecosystem for VMware’s competitors to grow, eliminating that complaint and removing one of VMware’s competitive advantages. In effect, VMware’s own actions are building out the partner ecosystem for their competitors and thus legitimizing the competition.

Am I crazy? Am I wrong? What is a company like VMware to do, if anything? I’d love to hear your thoughts.

UPDATE: Some readers have pointed out, rightfully so, that “legitimizing” isn’t really the best word to use here. Perhaps “assisting” or “helping” is a better word?

Tags: , , , , , , ,

Hyper9 VMM Released

In case you hadn’t already heard elsewhere, our good friends at Hyper9 have released the final version of Virtualization Mobile Manager (VMM). VMM works with VMware Infrastructure 3, VMware vSphere 4, Microsoft Hyper-V, and Citrix XenServer 5, and it’s accessible from just about any mobile device, including the Apple iPhone, Blackberry, Google Android, and Windows Mobile devices.

VMM is free for managing up to five virtual machines. Hyper9 is also offering special introductory pricing of only $199 to manage up to 1,000 virtual machines.

Tags: , , , , ,

Watching VMware destroy their public image over this VMworld exhibitor agreement is like watching a train wreck: you want to take your eyes off it, but it’s just so awful and so terrible that you’re mesmerized.

In case you don’t have any idea what’s going on, jump over and give this post a quick read. Done? OK, let’s continue.

In the update to that post, I said that VMware had clarified their position and that competition would be allowed at VMworld. Being the person that I am (I tend to take people at their word and trust that they are as honest and straightforward as I am), I left it at that. I was a bit curious why the exhibitors’ agreement contained language specifically targeted at their competition if all they wanted was to prevent exhibitors from behaving in an unseemly fashion, but rather than stirring up waters that had already been muddied, I decided to let things settle and see what happened.

Well, what happened was that Brian Madden—whom I have no reason not to trust, but at the same time I don’t know him personally—reports here that VMware is restricting the size of the booth that both Microsoft and Citrix are allowed to use. According to Brian, only VMware TAP Partners are allowed larger booths.

Alessandro Perilli (whom I do know personally, and who I know wouldn’t publish anything unless he was quite certain of his sources) also refers to Brian’s post in his own post here, lending further credibility to the claims about VMware’s actions.

So, let’s sum it up:

  • VMware adds language to their exhibitors’ agreement that is specifically targeted at their competitors in an effort to prevent unseemly behavior at VMworld.
  • VMware claims that competition will be allowed and they want to encourage a rich ecosystem of partners and competitors.
  • VMware limits their two key competitors, Microsoft and Citrix, to a 10-foot-by-10-foot booth, and further states that exhibitor employees must remain within the boundaries of their booth. (To be fair, VMware is also refusing to take their money for a larger booth.)

I tell my kids all the time, “Actions speak louder than words.” What would you derive from VMware’s actions?

Tags: , , , ,

The fine folks over at Hyper9 recently offered me a very limited number of special beta invitations for Hyper9’s new Virtualization Mobile Manager (VMM) product. As you may already know, VMM is the brainchild of Andrew Kutz, who recently joined Hyper9 and has already released a few snippets of code via H9Labs.

Here are some highlights of VMM:

  • Supports all major hypervisors: VMware Server 2, VMware Infrastructure 3 (VMware ESX and VMware ESXi 3.5, VirtualCenter 2.5), Microsoft Hyper-V, and Citrix XenServer 5
  • Runs as an Apache Tomcat web application, supported on Windows, Linux, and Mac OS X
  • Accessible from just about any mobile device: Apple iPhone, Blackberry, Google Android-based phones, and Windows Mobile devices
  • “Gracefully degrades” into Lite Mode if the mobile device doesn’t support all the web UI features

While the VMM beta is open to the general public, I have 15 special invitations that will grant additional benefits (extra perks, if you will). Specifically, these beta invitations will come with:

  • A 50% discount on the already low pricing for VMM once it is released
  • Automatic entry into a contest, starting in June, to win a mobile device
  • A limited edition Hyper9 T-shirt (assuming you provide a little feedback to the team at Hyper9)

Interested in one of these special invitations? Well, you’re going to have to work for it. Post a comment to this article telling me why you should be one of the lucky 15 readers who gets a special invitation. Telling me you’ll help promote my upcoming vSphere book might improve your chances…or it might not! I’ll leave comments open until Friday, May 22, or until the article gets 30 comments, whichever comes first. From those comments I’ll select the top 15 to receive the special invitation to the VMM beta.

In the event you aren’t interested in one of the special invitations, or if you read this article after the invitations have already been given out, you can also register for the beta from the Hyper9 community site.

So post your comment now!

Tags: , , , , , , ,

This is iForum 218, titled “XenServer 5: What the Other Virtualization Guys Don’t Want You to Know”. The presenters are Roger Klorese and Jill Skok.

Klorese starts out the presentation with XenCenter, the centralized multi-node management tool that ships with XenServer. As I’ve noted on this site before, XenCenter’s real-time replication and multi-node behavior is, I believe, superior to the highly-centralized model that VMware uses with vCenter Server. He then goes through some of the features available in XenServer (which Klorese tells us really means “XenServer + Citrix Essentials for XenServer”) like XenMotion, High Availability, Dynamic Provisioning Services (which I believe refers to Citrix Provisioning Server aka Ardence), and Lab Management (OEM’d from VMlogix).

XenServer 5.5 is currently in beta. What features does it introduce?

  • Active Directory authentication: This eliminates the need to log in as root to manage XenServer hosts via XenCenter. The granularity is a bit limited at the moment, but it’s a big step forward.
  • Workload balancing: This feature, similar to VMware DRS, includes both live workload balancing as well as optimized (or intelligent) placement. Workload balancing is policy-driven, allowing users to select maximum performance or maximum density. Workload balancing can make decisions not only on CPU and memory, but also on network and disk statistics. Workload balancing does require a separate, Windows-based server (is this server in addition to the server running Essentials?).
  • Enhancements to the Xen hypervisor to add support for Intel EPT and AMD RVI virtualization extensions.
  • Workflow automation and orchestration: Workflow Studio gets incorporated into some editions of Citrix Essentials. I’m not sure how this is a new feature of XenServer 5.5.
  • XenCenter adds an organization view, providing a different way to group and view objects within XenCenter. This lays the foundation for more role-based administration and role-based views.

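To make the workload balancing bullet a bit more concrete, here’s a toy placement-scoring function that illustrates the policy-driven idea (maximum performance versus maximum density). To be clear, this is purely my own sketch; the metric names, weights, and scoring logic are made-up assumptions, not XenServer’s actual algorithm.

```python
# Hypothetical sketch of policy-driven VM placement scoring, loosely modeled
# on the Workload Balancing concepts described above. All names and weights
# here are illustrative assumptions, not XenServer's real implementation.

from dataclasses import dataclass

@dataclass
class HostMetrics:
    cpu_free: float   # fraction of CPU headroom (0.0-1.0)
    mem_free: float   # fraction of memory headroom
    net_free: float   # fraction of network headroom
    disk_free: float  # fraction of disk I/O headroom

def placement_score(host: HostMetrics, policy: str) -> float:
    """Score a candidate host for a new VM under a given policy."""
    headroom = host.cpu_free + host.mem_free + host.net_free + host.disk_free
    if policy == "max_performance":
        # Prefer the host with the most headroom across all four metrics.
        return headroom
    if policy == "max_density":
        # Prefer the most-loaded host, consolidating VMs onto fewer hosts.
        return -headroom
    raise ValueError(f"unknown policy: {policy}")

hosts = {
    "host-a": HostMetrics(0.7, 0.6, 0.9, 0.8),  # lightly loaded
    "host-b": HostMetrics(0.2, 0.3, 0.5, 0.4),  # heavily loaded
}
best = max(hosts, key=lambda h: placement_score(hosts[h], "max_performance"))
print(best)  # host-a: the most headroom wins under max_performance
```

Under a “max_density” policy the same function would instead pick host-b, the busier host, which is the consolidation behavior the session described.
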
Klorese then moves into a discussion of XenServer’s storage integration technologies, under the umbrella of StorageLink. Underneath it, XenServer’s storage layer underwent some changes that enable snapshots on all types of storage repositories. A new feature, called LVHD, brings the VHD layout to existing LVM storage repositories; it’s a fast and simple upgrade that adds new features.

Another new storage-related feature is backup enablement with Symantec NetBackup. From Klorese’s description, this essentially sounds like VMware Consolidated Backup (VCB)—in other words, it’s not a backup solution but a framework for enabling backups with other products. When XenServer 5.5 is finally released, there will be documentation and best practices available to configure this backup enablement with NetBackup.

Delving back into StorageLink, Klorese describes some of its functionality. StorageLink requires a separate server in order to function, called the StorageLink Gateway Server (it may share hardware with the Workload Balancing server mentioned earlier). StorageLink Manager is the tool for managing and configuring StorageLink, although most StorageLink tasks are handled inside XenCenter (you do have to use StorageLink Manager when using Essentials with Hyper-V). The command-line interface (CLI) is required for the initial setup of StorageLink with the storage array.

StorageLink is evolving into Citrix Ready Open Storage, which opens StorageLink up to many more storage vendors and their array functionality with XenServer 5.5. Products that participate in Citrix Ready Open Storage will work with Essentials for XenServer as well as Essentials for Hyper-V.

What about VM portability? Klorese indicates that multi-hypervisor interoperability is made possible by StorageLink and Citrix Essentials. This allows a VM to be moved between XenServer and Hyper-V. Klorese also mentions XenConvert 2.0, which provides extensive P2V and V2V functionality.

The next portion of Klorese’s discussion focuses on the total cost of ownership (TCO) of XenServer versus other virtualization solutions. Naturally, VMware is in his sights here. Klorese feels that free XenServer with a support contract meets the needs of the majority of users; for everyone else, Citrix Essentials provides workload balancing, high availability, StorageLink, etc. I’d be interested to see the pricing of XenServer plus Citrix Essentials versus VMware Infrastructure 3 (or VMware vSphere 4).

As if Klorese read my mind, the next slide is exactly that. Although he doesn’t mention VMware by name, it’s clear who he’s talking about. Some of the points Klorese makes are absolutely valid (using more RAM in the servers instead of worrying about memory overcommitment may make a lot of sense in some cases), but other points aren’t quite so clear. For example, Klorese lists almost 10x the “advanced virtualization management” costs for the opponent, but not for Citrix XenServer. The basis for that claim is that not all the servers in the farm need High Availability and other advanced features, therefore there’s no need to license them everywhere and you can save money by not buying those licenses. In my mind, that’s a weighted comparison.

At this point, Jill Skok of Accenture takes the podium. Her discussion is about building a virtualization practice on XenServer. I was most interested in the technical aspects of the XenServer discussion, so at this point I wrapped up my coverage.

Tags: , , , ,
