The Right vSphere Design Perspective

I was on a conference call the other day where the topic was VMware View 5.0. If you attended VMworld—or even if you didn’t, it’s been kind of hard to miss—you know already that VMware made significant improvements to PCoIP in View 5. For one reason or another, the topic of the Teradici PCoIP offload card came up, and other participants on the call immediately asked about the availability of the card in mezzanine card format for blade server implementations. (Continue reading, and at the end I’ll tell you the answer I received to that question.)

There are lots of blade server implementations on the market; I’ve had direct hands-on experience with two of the leading ones, Cisco UCS and HP BladeSystem c-Class. Both are excellent products, and both have their strengths and weaknesses. However, the one (relative) weakness they share is a lack of expansion options. The HP BladeSystem blades are a bit better here, as they have onboard NICs (10Gb as well as 1Gb) without using an expansion slot. Either way, though, let’s face it—mezzanine card slots in a blade environment aren’t exactly plentiful. This is especially true with Cisco UCS, where the half-width B-Series blades require a mezzanine card just to have network connectivity. Now you want to go and use one of these scarce slots for an offload card?

In my mind, this really underscores the need to view vSphere designs—including designs that incorporate “upper layer” products like View, vCloud Director, and/or Site Recovery Manager—through a broad lens. Even in a dedicated VDI environment, where the ESXi hosts are used only to host virtual desktops, is the trade-off between network connectivity and offloaded PCoIP processing worth it? If adding a PCoIP offload card means you have to move from a half-width/half-height blade to a full-width/full-height blade (thus cutting your compute density in half), was it really worth it? Was it worth the extra rack space, extra power, extra network drops, and extra expense? This is why, as vSphere architects, we sometimes have to “take a step back” and look at things from a broader perspective. Otherwise, you could find yourself adding some piece of technology and crippling the overall design in the process.
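
To make that trade-off a bit more concrete, here’s a quick back-of-the-envelope sketch in Python. The blade counts and desktops-per-host figures are purely hypothetical placeholders (not vendor numbers), but they illustrate the kind of math worth doing before committing a mezzanine slot, or a bigger blade, to the offload card:

```python
# Rough back-of-the-envelope sketch of the density trade-off described above.
# All numbers are hypothetical placeholders; plug in the blade counts and
# desktops-per-host figures from your own sizing exercise.

def desktops_per_chassis(blades_per_chassis, desktops_per_host):
    """Total virtual desktops a single blade chassis can host."""
    return blades_per_chassis * desktops_per_host

# Scenario A: half-width blades, no PCoIP offload card
half_width = desktops_per_chassis(blades_per_chassis=8, desktops_per_host=100)

# Scenario B: full-width blades to free up a mezzanine slot for the offload
# card; assume (hypothetically) the card lets each host run somewhat more desktops
full_width = desktops_per_chassis(blades_per_chassis=4, desktops_per_host=130)

print(f"Half-width, no offload card:  {half_width} desktops per chassis")
print(f"Full-width with offload card: {full_width} desktops per chassis")
# Unless the offload card roughly doubles per-host density, halving the blade
# count per chassis costs you total capacity, plus the rack space, power, and
# network drops for the extra chassis you now need.
```

In this made-up example, the offload card would have to double the per-host desktop density just to break even on capacity per chassis, before you even account for the extra rack space, power, and network drops.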

(This is not, by the way, a rant against the PCoIP offload card—I can definitely see some value in it for dedicated VDI environments. The offload card just happened to be the catalyst that triggered this post.)

The answer, by the way, to the availability of the PCoIP offload card in mezzanine card format is “It’s up to the OEM.” Not much of an answer, I know, but the only answer that’s available right now.

If you have additional thoughts you’d like to share, I encourage you to add them in the comments below.


8 comments

  1. Vijay Swami

    Coincidentally, I had a customer ask me the same thing this week regarding the mezz offload cards for PCoIP. The Cisco UCS BU says they have no plans to support the card at this time. That could, of course, change with customer demand.

  2. slowe

    Vijay, I’m not surprised by that statement.

    Joel, I think you’re missing the point. You can certainly *make* the PCoIP offload card work using the PCIe expansion blade you pointed out, but there is still an impact to the design. It’s not about whether the PCoIP card is available in mezzanine card format; it’s about making technology decisions without properly considering the impact of those decisions on the overall design.

  3. Justin Hart

    Do not forget about Xsigo. Yes, they do not fall under the scope of a blade server implementation, except for one thing: HP has an InfiniBand switch module that takes the place of HP’s Virtual Connect. This doesn’t take up blade slots; instead, it resides in the switch slots. Feel free to search for “InfiniBand for HP BladeSystem c-Class” for more details. Once this is done, there is a vast change in the amount of FC and 10GbE available to the blades. This further changes your design decisions, because now you can use Xsigo to provision vNICs and vHBAs to your blades. The blades can now have more 10GbE and 8Gb FC. So, with that said, you have a little more flexibility in your environment. Regardless of the design decision, I just mention this because, as you step back, there are other options when using the HP c7000. I mean, pass-through, Virtual Connect, and InfiniBand can all be a good idea in the right circumstances.

    Cheers…

  4. Joel Lindberg

    Scott, I got the point of the post, and I liked it.
    I just wanted to point out the expansion blade option, since you did not mention that as an alternative solution.

  5. slowe

    Justin, yes, Xsigo would be an option, and—as you point out—it would have numerous impacts on the design in a variety of areas. It’s important to understand that while you can have “more” 10GbE NICs and 8Gb FC HBAs, all those virtual NICs and virtual HBAs have to be backed by actual bandwidth on the other side of the I/O Director. This is, of course, one of the areas you must consider in your design, lest you oversubscribe it and end up with poor performance.

    Joel, thank you for pointing out that there is an alternative solution—sorry that I misread your comment. I’m glad you liked the post.

  6. Andrew

    Has there been any progress on the availability of offload cards in a mezzanine format? I’m about to deploy a new HP C7000 solution dedicated to View and am very interested in the offload option. Thanks everyone.

  7. slowe

    Andrew, I don’t have any additional information to share at this point. The best advice I can offer is to talk to (in your case) HP directly to see what they have to say. Thanks for your comment!
