Mudslinging Between Cisco and HP

I’ll preface this article by saying that I am not (yet) an expert with Cisco’s Unified Computing System (UCS), so if I have incorrect information I’m certainly open to clarification. Some would also accuse me of being a UCS-hater, since I had the audacity to call UCS a blade server (the horror!). Truth is, I’m on the side of the customer, and as we all know there is no such thing as a “one size fits all” solution. Cisco can’t provide one, and HP can’t provide one.

The mudslinging that I’m talking about is taking place between Steve Chambers (formerly with VMware, now with Cisco) and HP. HP published a page with a list of reasons why Cisco UCS should be dismissed, and Steve responded on his personal blog. Here are the links to the pages in question:

The Real Story about Cisco’s “One Giant Switch” view of the Datacenter (this was based, in part at least, on the next link)
Buyer beware of the “one giant switch” data center network model
HP on the run

I thought I might take a few points from these differing perspectives and try to call out some mudslinging that’s occurring on both sides. To be fair, Steve states in the comments to his article that it was intended to be entertaining and light-hearted, so please keep that in mind.

Point #1: Complexity

The reality of these solutions is that they are both equally complex, just in different ways. HP’s BladeSystem Matrix uses reasonably well-understood and mature technologies, while Cisco UCS uses newer technologies that aren’t as widely understood. This is not a knock against either; as I’ve said before in many other contexts, there are advantages and disadvantages to every approach. HP’s advantage is that it leverages the knowledge and experience that people already have with their existing technologies: StorageWorks storage solutions, ProLiant blades, ProCurve networking, and HP software. The disadvantage is that HP is still tied to those same “legacy” technologies.

In building UCS, Cisco’s advantage is that the solution uses the latest technologies (including some that are still Cisco-proprietary) and doesn’t have any ties to “legacy” technologies. The disadvantage, naturally, is that this technological leap creates additional perceived complexity because people have to learn the new technologies embedded within UCS.

On top of the fact that both of these solutions are equally complex in their own ways, you must also re-architect your storage in order to gain the full advantage of either solution. To get the full benefit of either UCS or HP BladeSystem Matrix, you need to be doing boot-from-SAN. (Clearly, this doesn’t apply to virtualized systems, but both Cisco and HP are touting their solutions as equally applicable to non-virtualized workloads.) This is a fact that, in my opinion, has been significantly understated.
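
To illustrate why boot-from-SAN matters so much here, consider a toy Python sketch of the “server profile” idea both vendors are selling. This is purely my own illustration, not UCS Manager’s or Matrix’s actual object model or API: the point is simply that if the server’s identity and its boot LUN both live outside the blade, the profile can be re-applied to any spare blade, whereas an OS on a local disk cannot follow the profile.

```python
# Toy illustration (my own, not any vendor's actual API): why boot-from-SAN
# underpins the "stateless server" pitch from both Cisco and HP.

from dataclasses import dataclass

@dataclass
class ServerProfile:
    name: str
    wwpn: str          # SAN identity travels with the profile, not the blade
    mac: str           # LAN identity travels with the profile, not the blade
    boot_lun: str      # OS lives on the array, so any blade can boot it

def associate(profile: ServerProfile, blade_slot: str) -> str:
    """Apply a profile to a physical blade; the blade inherits the identity."""
    return (f"Blade {blade_slot} now presents WWPN {profile.wwpn}, "
            f"MAC {profile.mac}, and boots {profile.boot_lun}")

web01 = ServerProfile("web01", "20:00:00:25:b5:00:00:01",
                      "00:25:b5:00:00:01", "SAN LUN 7")

print(associate(web01, "1/3"))   # initial placement
print(associate(web01, "2/5"))   # blade fails? re-apply the profile elsewhere
```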

Neither HP nor Cisco really has the right to proclaim its solution less complex than the other’s. Both solutions are complex in their own ways.

Point #2: Standards-Based vs. Proprietary

Again, neither HP nor Cisco really has any room to throw the rock labeled “Proprietary”. Both solutions have their own measure of vendor lock-in. HP is right; you can’t put an HP blade or an IBM blade into a Cisco UCS chassis. Steve Chambers is right; you can’t put a Dell blade or a Cisco blade server into an HP chassis. The reality, folks, is that every vendor’s solution has a certain amount of vendor lock-in. Does VMware vSphere have vendor lock-in? Sure, but so does Hyper-V and Citrix XenServer. Does Microsoft Windows have vendor lock-in? Of course, but so does…so does…well, you get the idea.

HP says VNTag is proprietary and won’t even work with some of Cisco’s own switches. OK, let’s talk proprietary…does Flex-10 work with other vendors’ switches? The fact of the matter is that both Cisco and HP have their own forms of vendor lock-in, and neither can cry “foul” on the other. It’s a draw.

Point #3: The “Giant Network Switch”

At one point in HP’s article (I believe it was under the Complexity heading) they make this point about the network traffic in a Cisco UCS environment:

In Cisco’s one-giant-switch model, all traffic must travel over a physical wire to a physical switch for every operation. Consequently, it appears that traffic even between two virtual servers running next to each other on the same physical [server] would have to traverse the network, making an elaborate “hairpin turn” within the physical switch, only to traverse the network again before reaching the other virtual server on the same physical machine. Return traffic (or a “response” from the second virtual machine) would have to do the same. Each of these packet traversals logically accounts for multiple interrupts, data copies and delays for your multi-core processor.

I do have to call “partial FUD” on this one. In a virtualized environment, even one running the Cisco Nexus 1000V, traffic from one virtual server to another virtual server on the same host never leaves that host. HP’s statement seems to imply otherwise, but as far as I know that behavior holds. However, HP’s statement is partially true: traffic from a virtual server on one physical host does have to travel to the fabric interconnect and then back again in order to reach a virtual server running on a different physical host in the same chassis. The fabric extenders don’t provide any switching functionality; that all occurs in the interconnect. Based on the information I’ve seen thus far, I would say that using Cisco’s SR-IOV-based “Palo” adapter and attaching VMs directly to a virtual PCIe NIC would put you into the situation HP is describing. That reinforces a question Brad Hedlund and I tossed back and forth a couple of times: is hypervisor-bypass, aka VMDirectPath, with “Palo” the right design for all environments? In my opinion, no; I again go back to my statement that there is no “one size fits all” solution. And considering that hypervisor-bypass with “Palo” forces traffic between two virtual machines on the same physical host to travel to the fabric interconnect and back again, I’m even less inclined to use that architecture.
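
To make the traffic-path argument a little more concrete, here’s a quick back-of-the-envelope sketch. It’s purely illustrative; the hop lists are simplified assumptions on my part, not anything measured or vendor-supplied, but they capture the difference between a software switch keeping intra-host traffic local and hypervisor-bypass pushing everything up to the fabric interconnect.

```python
# A rough, illustrative model of the traffic paths described above.
# The hop lists are simplified assumptions for discussion, not vendor data.

def vm_to_vm_path(same_host: bool, hypervisor_bypass: bool) -> list:
    """Return a simplified hop list for a frame between two VMs."""
    if same_host and not hypervisor_bypass:
        # A software switch (vSwitch or Nexus 1000V) forwards in host memory;
        # the frame never touches a physical wire.
        return ["source VM", "software vSwitch", "destination VM"]
    # With hypervisor-bypass, or with VMs on different blades, the frame goes
    # up through the fabric extender (which does no local switching) and
    # hairpins at the fabric interconnect.
    return [
        "source VM", "adapter (vNIC)", "fabric extender",
        "fabric interconnect (switching happens here)",
        "fabric extender", "adapter (vNIC)", "destination VM",
    ]

scenarios = {
    "same host, software vSwitch":    (True, False),
    "same host, hypervisor-bypass":   (True, True),
    "different blades, same chassis": (False, False),
}
for name, (same_host, bypass) in scenarios.items():
    print(f"{name}: {' -> '.join(vm_to_vm_path(same_host, bypass))}")
```

The toy model just makes the underlying point explicit: where the switching happens determines how often a frame crosses a physical wire, and that is the real trade-off behind HP’s claim and Cisco’s rebuttal.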

In the end, it’s pretty clear to me that both HP and Cisco have some advantages and disadvantages to their respective solutions, and neither vendor really has room to label the other as “more complex” or “more proprietary.” But what do you think? Do you agree or disagree? Courteous comments (with full vendor disclosure) are welcome.


  1. Dave

    I think the interesting thing is this benefits both as it is getting more and more people thinking about this new methodology vs traditional data centers.

    I think the big loser so far is Dell, as according to their analyst reports they do not see this as an important area. That is despite having OEM’d the PAN system from Egenera, which is where the original idea for this came from. Heck, many of the engineers behind Cisco UCS came from Egenera.

    So, agree or disagree, the complexity will be reduced with both. We really will not know until you start seeing more of an install base. Until then most of it is marketing FUD.

  2. Nik Simpson

    To be fair to HP, I think their comment about not being able to put HP blades in a UCS environment is being misinterpreted. The point is (I think) that it will be difficult to integrate HP or Dell (though that may change) or IBM blades and blade chassis into a Nexus-style UCS network because of Cisco’s proprietary extensions to Ethernet that others don’t yet support.

  3. slowe

    If that is indeed the case, Nik, then I would agree with HP’s point–it is, at this time, very difficult (impossible?) to fully integrate non-Cisco blades into a Nexus-style UCS network. On that point, the “vendor lock-in” argument against Cisco becomes much stronger.

  4. Ken Oestreich

    Disclosure: I work for Egenera….

    All good comments. I would like to point out re: Dell & Egenera’s PAN Manager that (1) the “original” idea has thousands of installed locations globally, (2) the solution is based on standard Dell blades *and* standard Ethernet, (3) there is no reason why it can’t be ported to other platforms, and (4) from a complexity standpoint, it involves far less software & fewer consoles than either HP or Cisco. On that last point, to operate HP, you’ll need to use nearly a dozen HP products; on the UCS side, UCS Manager essentially requires the use of 3rd party products like BMC BladeLogic, VMware, etc. before you have an operable product.

    But finally, let me up-level: I must agree with Dave – that I’m thrilled that this new approach to managing servers/infrastructure (from so many vendors) is finally getting the market exposure it needs. It really is fantastic, and is the perfect complement to hypervisor-based virtualization.

  5. Justin

    Disclosure: I work for an HP partner.

    I think your comment about Flex-10 vs. the UCS network system isn’t quite an apples-to-apples comparison. HP’s point is that beyond the blade chassis itself, Flex-10 has no further restrictions. HP really doesn’t care whose Ethernet switches you use outside of the blade chassis itself, whereas Cisco requires not only their chassis hardware but also their external switches.
    So yes, they’re both some form of vendor lock-in, but I would say that Cisco’s is to a greater degree.

    In my opinion, the really ridiculous thing is that for the last two years, Cisco has been trying to steer customers away from HP’s VirtualConnect, making stinks about how it’s a bad idea, or it’s not compatible with their switches, or it will give you support problems. Then they turn around and implement the exact same thing in their own blade solution, and suddenly it’s okay? Typical FUD, like Microsoft claiming live migration is a bad or useless technology until they have it themselves.

  6. Steve Chambers

    Scott, thanks for providing a 3rd eye on this. My response to the HP “article” was meant to be light-hearted and generate discussion. There is more detailed information coming out of the proper channels at Cisco as UCS takes off in customer sites around the world. I think part of the problem with FUD at this point is that UCS is new; once more technical, operational, and business details emerge, better comparisons that are more valuable to customers can be made. Cheers, Steve

  7. slowe

    Justin,

    Thanks for your comment. You are correct—Flex-10 has no further restrictions once you are outside the chassis. But then Cisco could say that there are no further restrictions on the network once you step outside of UCS. My point is not to knock HP’s Flex-10 or to knock Cisco’s UCS, but rather to point out that each of them has their own form of complexity and their own form of vendor lock-in. Neither has room to cry foul to the other.

    Steve,

    I did point out in the article that your response was intended to be light-hearted and that readers should take that into account, but thank you again for making that clear. Clearly, as more solid technical and operational information becomes available on both UCS and BladeSystem Matrix—as you so rightly point out—more valuable and pertinent comparisons can be made.

    Thank you both for reading and commenting!

  8. Daniel Bowers

    Great thread.

    I liked Steve’s post. He said it was for light-hearted fun, and I think he writes well. Plus, it’s cool that UCS is injecting more voices into our blogs! (For those who don’t know me, I’m an HP employee, working in the BladeSystem development team. Same disclaimer as Steve: Thoughts herein are my own, and not my employer’s!)

    Scott nails why it’s hard to jump from Steve’s humor to serious debates right now: Scott’s not an expert on Cisco UCS (yet). The pool of Scott-like guys with UCS experience isn’t big enough for the user community to render judgements today.

    Scott, I do want to hear your take on the “separate silos between networking & servers” theme.

    I agree w/Nik, it’s not about whether you can plug a device from Vendor A into a chassis from Vendor B. The question is, can Vendor A products and Vendor B products co-exist in my server room, and will a common set of best-practices and management tools work with them both?

    Re: Scott’s comment about Flex-10 working with other vendors’ switches. I know he’s going to scold me (again) for saying this, but a Virtual Connect module (and Virtual Connect Flex-10) aren’t switches. They DO work with Cisco switches. I get your point, tho — to do Flex-10 partitioning and bandwidth control, you gotta have a Flex-10 capable interconnect in your blade enclosure…and today that means an HP one. OK, I’m ready for your barbs about Virtual Connect not being a switch :)

  9. Brad Hedlund

    Scott,
    Another well written, unbiased, fair perspective on the important topics of the day. Hats off to you again.

    For those that say Cisco is using “Proprietary” extensions to Ethernet … well, that’s simply not true. VNTag is NOT proprietary. VNTag is a proposed IEEE standard. FCoE is NOT proprietary. FCoE is a fully ratified standard.

    If the primary concern with UCS is “newer technologies that are not widely understood” … this is good news to me … because in 6-9 months that will no longer be the case. Many gents like myself are gainfully employed at Cisco to educate customers on these newer technologies employed by UCS.

    Cheers,
    Brad Hedlund
    (vendor disclaimer: Cisco Systems) :-)

  10. Jayadeep Purushothaman

    Agree with you Scott – extending the complexity logic would mean that blades are a complex solution, and in fact virtualization itself is another complex solution. It depends on the perception of value that these solutions provide for the users. Invariably, though, vendors make things far more complex than they should be, with an eye toward locking in the customers. But as you rightly point out, there are no real lock-in-free solutions out there.

  11. Eric Wenger

    The only case where traffic would leave the box to traverse between VMs is with Cisco’s virtualized NIC. That NIC would allow each VM to use VMDirectPath to have physical access to its own NIC. From my recent benchmarking work with IxChariot and a couple of the major players in this space, it may be more efficient in the long run to go off the box in favor of reducing the load from the vSwitch.

    One question for anyone else out there — how do you measure the impact of vSwitch-ing on CPU utilization? Certainly it must be frustrating to have a hidden process chewing up processor time that should be allocated to the VM.

  12. Daniel Bowers

    Brad makes two good points re: VNTag & FCoE, but I wanted to dig further.

    VNTag is NOT proprietary: It’s not a standard, either. Cisco has suggested VNTag to IEEE as an extension to 802.1, but AFAIK Cisco’s the only vendor who wants to offer it. To be fair, Flex-10 isn’t an industry standard either; but it’s also not a protocol, so it doesn’t need IEEE ratification for people to adopt it. You can use Flex-10 and still use whatever upstream switch you want; whereas with VNTag, you have to use Nexus switches at the edge.

    FCoE is NOT proprietary: True; T11 passed the standard for encapsulating FC on Ethernet. However, since there’s no standard or agreement on CEE / lossless Ethernet, and FCoE requires that, you’ve only got half the solution.

    (Disclosure: I work for HP.)

  13. Brad Hedlund

    Daniel,

    VNTag was brought to the IEEE standards body by both Cisco and VMware.

    You’re right, the FlexNIC / Flex-10 relationship does not use a protocol and as a result has severe limitations, such as the one where a VLAN must be unique to only one FlexNIC per LOM. This pretty much reduces the value of FlexNIC to that of a per-VLAN rate limiter, rather than providing true network interface virtualization (NIV). The reality is that adding a new tag to the frame is the easiest and most cost-effective way to provide true NIV, such as a VNTag. I know HP has figured this out well with your proposal of VEPA, which also specifies a tagging method, although none of your current shipping gear (Flex-10) has VEPA capabilities.

    Do you understand that CEE / DCB is a collection of standards (some ratified, some not)? And the ONE standard that enables a lossless Ethernet fabric (802.1Qbb Priority Flow Control) is fully ratified.

    Of course if you have a switch such as Flex-10 that does not provide lossless Ethernet forwarding — you don’t just have half the solution, you have none of the solution. Any Flex-10 modules sold today will need to be trashed to implement a unified fabric with FCoE and true NIV. It’s tough to break that news to your customers, so I can understand your position.

    Cheers,
    Brad

    (vendor disclosure: Cisco Systems)

  14. slowe

    I’ll just remind everyone to keep the comments courteous, and be sure to provide full vendor disclosure. Otherwise, let’s keep up the great discussion that’s brewing here. Thanks!

  15. Russell

    Disclosure: I work for a VMware partner who is also a Cisco partner.

    I just wanted to address the whole “upstream switch needs to be Cisco” thing. Yes, you do need the 6100 fabric interconnect (this is where UCS management lives); however, you can uplink to an Extreme Ethernet core with a Brocade SAN fabric if you so desire. Obviously Cisco would like you to uplink to a Nexus 7k core with an MDS SAN fabric, but there are no major interoperability issues with other switch vendors.

    VN-Link is basically an optional feature, as are things like hypervisor pass-through. I think it’s a bit disingenuous to focus on the secret sauce from one vendor while touting one’s own secret sauce. At the end of the day you can hand off standard Ethernet and FC to whatever switch vendor you want from either platform.

  16. rodos

    Disclosure : I work for an integrator that does both UCS and Cisco.

    On the Palo and DirectPath: I don’t see Palo as simply a way to do DirectPath. I think it will be used more for creating segmentation and QoS. For example, there may be a number of FC interfaces with different bandwidth and QoS associated with each. People may use it for tiering of storage and network traffic. There will be ways of doing this at other points, say via the N1K, but Palo will be a tool in the toolbox.

    Rodos

  17. Bobby

    Does anyone know if VNTag is actually something that is usable today on the shipping UCS product?

    I am told by Cisco employees that VNTag is not available today. NX-OS cannot be configured for VNTag. There is no command to enable it. Cisco said that NX-OS will not support it till the end of this year at best. No one can use it if they wanted to.

    True ?

  18. Frank

    VNTag is already running on the UCS, Nexus 5k and Nexus 2k. All forwarding in UCS implies a default VNTag, imposed by the 2104 as it communicates with the 6120, whether the adapter has interface virtualization support or not. This is similar to a default VLAN configuration inside of a switch.

    There are two things to understand with DCBX and the Cisco virtual adapter. The Cisco adapter is specifically designed for technologies such as VMDirectPath, and there is more to the story than just assigning a VF to a VM.

    DCBX opens the communication between the adapter and the fabric interconnect, then the VN-Tag protocols negotiate logical interfaces. If you have network experience, think of ISDN D-channel, Frame Relay DLCI, RSVP, or ATM SVCs. There is a signalling protocol that determines how to enable virtual interfaces. The policy assigned to those virtual interfaces will manipulate CoS values.

    The issues regarding encapsulation of protocols and new standards are always in flux. HP wanted a different vantage point than the one proposed and co-developed by VMware and Cisco. This is the usual vendor-to-vendor jousting.

    The fact of the matter is that the major benefit of FCoE is not in its name. It is to do access-layer consolidation of physical infrastructure and to have a wire-once, all-storage model. You may argue that blades already do that, and I will suggest that you do an exploded-engine view of the parts and then re-ask that question. The implementation that maximizes its value is standard today, regardless of the other working group activities. Don’t confuse vendors dragging their feet or being behind with standards. Also, if you want to discuss FIP, multi-hop, etc., that is different than what ANSI just ratified.

    Flex-10 is closer to a TDM SONET network than it is to a virtualized adapter. HP did that technology to reduce IO modules in their chassis, period. They sacrifice capabilities as a result. Once you create FlexNICs, you no longer have 10Gbps, which is a fundamental difference. The Cisco approach with virtualized adapters will allow you to create many more interfaces than four per IO module, and it will not force you to buy isolated IO modules.

    If we really want to get into this, we could bring up the mere 2MB of shared buffer for the large number of 10G servers HP will have riding on that card, or the lack of QoS – yes, rate-limiting an interface is not QoS, but it is TDM. We could bring up the lack of enforcement of QoS – don’t be confused by a NIC that can mark CoS bits… that is not QoS either. We could also ask why, if we are really focused on virtualized workloads, HP is conspicuously absent from VMware DVS – hmmm, no VC for VMware – could it be for lack of features?? BTW – Hyper-V works just fine on UCS also.

    HP isn’t standing still on the Flex-10, but enough said on that.

  19. Bobby

    “Flex-10 is closer to a TDM SONET network than it is to a virtualized adapter. HP did that technology to reduce IO modules in their chassis, period”

    I don’t know too many application servers that require multiple 10Gb LOMs. The issue here is that HP addresses physical and virtual servers with the technology. It’s not just a VMware play. That said, the attractiveness is in reducing the spend on the I/O modules. Hitting Cisco right in the belly.

    Yeah, you lose some features, but I’m not sure the VNTagging is truly something worth sticking around for when it requires you to buy proprietary Ethernet.

    That said, HP’s Flex-10 requires you to buy Flex-10-enabled blades/servers and I/O modules, so…

    All I know is that if I outfit a BladeSystem chassis with six 3120s today and want that same bandwidth with Flex-10…the customer can save 50K on the infrastructure without sacrificing any performance.

    Customers are not jazzed at all about a version 1 UCS system with zero market penetration, using non-standard Ethernet (when ProCurve is nipping at their heels), and teamed up with BMC, EMC, NetApp and everyone else in the world. People will look at Dell for that kind of hodgepodge, relationship-style “Run my Datacenter” strategy. At least Dell has 2 years behind them in that hodgepodge style.

    HP and IBM are the real players here. If you can’t see that, then everyone blogging out there is so naive.

    Companies will test and prod at UCS because it is what they do, but adoption of this system is a risk to any business, large or small. If they are still around selling Generation 3 of it…I bet they do gain some market share.

  20. Frank

    Actually, HP restricting the feature set of VC that can be put on a real switch was HP’s punch to Cisco in the belly. It’s the same reason HP delays many feature updates to a good switch.

    I think it is really funny that HP scat is mixed in with this mudslinging thread by claiming a proprietary switch. Tell me what other IO modules run the VC function within a C7000 or C3000. Why does HP require their VC FC module to do vWWN functions? Because the VC Ethernet module that runs the VC management function makes it mandatory. I guess this is not proprietary.

    By the way, here is a link to an HP Nexus 5000 switch that they do not appear to be claiming as proprietary (which runs standard Ethernet, standard FC, standard FCoE, and runs VN-Tag):

    http://h18006.www1.hp.com/storage/saninfrastructure/switches/5000nexus/index.html

    HP must like something on one hand and not like it on the other… but no FUD or scat being tossed in the previous post.

    You need to test performance beyond the BS marketing of the 10G platform. When a 3120 offers more inter-chassis bandwidth than a 10G switch from HP (one with only a 2MB shared buffer for many 10G ports), that’s another segment of HP scat.

    Risk. I just love every competitor over the years who makes BS (read: blowing smoke) marketing statements about risk. The fact of the matter is that any new product, from any vendor, increases risk. Any new software is a risk, and customers must do their due diligence for every product.

    A couple of things to note… How many of the “TOP” people have left HP from the c-Class team to join Cisco? How many from the top server vendors, and why? Not because there is any more risk than with any other product….

    They do it because of value… let’s post again on this topic over the next 4 quarters when Cisco reports UCS revenue… You can find the Cisco con call information on the company website….

    (Editor’s Note: This commenter did not provide vendor disclosure, but it came from a Cisco IP block and used a Cisco.com e-mail address, so it’s reasonably safe to assume he is a Cisco employee.)

  21. Jay

    Thank you for the very informative discussion. The following is just my opinion and I am not representing any company here. I wish to suggest we add comparisons for ROI and TCO while these two companies jockey for our attention and commitment. One must spend wisely so as not to become the only casualty of this war.

    HP wins on density with their C7000 chassis, but I love the value and performance of 2 UCS chassis with 16 servers, and what it costs to acquire, deploy and grow. I also like the performance of the Nehalem processors and maximum extended memory in the UCS 440 blades vs. equivalent HP blades when running VMware. With Balboa, Palo cards and my Nexus switches I can leverage my legacy Cisco rack-mount servers that were cheaper and faster than HP 1U servers when we acquired them. The UCS is more complex to deploy initially as it forces us to be strategic, but expansion and change is much simpler and way cheaper than the same with HP’s solution. I find UCS flexibility much better for IT Operations, as every build in HP introduces rework unless you fork up the $hefty for RDP. UCSM is built into the UCS platform, and even the two 6100 fabric interconnects cost less than RDP. HP is more mature, but Cisco is learning and growing fast.

    This battle between the two has even opened our eyes to the challenge between Xiotech and HP EVA. Oversubscription and performance came up in our comparisons of blade systems, and we looked at new ways of deploying SANs. Booting from SAN, managing MDS switches and supporting FC fabrics with Cisco’s family of MDS 9000s (some rebranded for HP) = about the same. We never chose HP Virtual Connect solutions because it took HP too long to catch up with the Cisco MDS 9124s and 3120s (offered for the HP blades) 3 years ago. We required the performance and bandwidth of the Cisco interconnect devices when VC was too weak to deliver. VC also did not buy us anything we couldn’t get with Cisco Flex Attach, as we always booted from SAN. Cisco offered stacked L3 switches with massive uplinks and lower over-subscription per blade than HP blades with VC. UCS is 2:1 over-subscribed, and with the UCSM I can pin line rate to my servers, thus providing 10G FC and 10GE Ethernet to each of my pinned blades via the 6100 until failover is necessary. Try that with 6 GE NICs and two HBAs in HP. Even with virtual NICs HP is still behind at 4, while Cisco UCS now offers me 58.

    Having a homogeneous environment with HP has its advantages, but Cisco introduces standards and new technologies which, when paired with best-in-class hardware, software, configurations and processes, shifted the paradigm to a focus on cost while rapidly moving us closer towards an “IT Operations” utopia. Just my two cents, and I work for neither – I am a customer of both who is waiting for HP to get serious about “inventing” and hit back. Cisco is behind but gaining very fast while using aggressive pricing to make the fight benefit us: the customer.

  22. Adrian

    Lol, the guy above me just broke through a wall yelling Oooohh Yeeaahhh!!! Listen here, Kool-Aid.

    I’ve looked very hard at both solutions, as I’ve been selling both since they emerged. I have over a decade of technology experience with both HP and Cisco. I’m currently in a battle royale with 2 clients, and this time I’m on the HP sideline for both.

    I’ve been both given and sold on the Cisco product offerings for years now. I have sold UCS with tremendous success in the past, but there are a few large caveats that Cisco camouflages deliberately. That’s right, look out! Here’s a sales guy’s perspective, and I’m opening the floodgates!

    To start – I like Cisco switches. They are proven and they are rock solid. Their firewalls are nice, as solid-state firewalls go, though not my first choice for any enterprise of scale. UCS, on the other hand, I wouldn’t wish on my worst enemy. I’ve been through the Cisco training, I’m certified to sell UCS, and I’ve been to plenty of technical sessions on the solution from its inception. I also sell HP, EMC, NetApp, Brocade, IBM, the list goes on, and have a fluency with VM Ware. I’d say, based on the majority of sales people that I’ve met, I’m a bit more technically savvy than most. You can’t BS a BSer, and I’d like to clear a few things up…

    The largest being the upfront spend in relation to the ongoing Capital Expense with UCS. Yep, the “S-word”, but this is just the icing on the cake as you’ll come to see.

    (What’s the difference between selling heroin and Cisco? Your drug dealer does not charge an ongoing fee to ensure your needle is sharp.)
    (What if your drug dealer offered you SmartNet on your intravenous needle? Yes, he’d be richer, but the answer is – you’ll get a new syringe if the needle breaks; unfortunately, the plunger is not covered.)

    I digress.

    Granted, the thought behind the UCS “concept” is compelling when viewed from 40,000 feet, and for certain existing environments it may actually be a wise spend. (A mid to large environment consisting of existing Nexus Series switching with archaic servers, or looking to move to blades for the first time with Nexus already in place.)

    For the 2 clients I’m working with, it’s ludicrous that they are even entertaining the thought of moving in the UCS direction… One has IBM blades, the other HP.

    Here’s a bit of UCS info everyone should be made aware of:

    #1.) Unless you’re planning on purchasing or currently have a Nexus 7000 switching environment you will have no layer 3 functionality and seriously limit any sort of VLANing capability within your environment. UCS is dependent on a very nice core switching environment.

    #2.) That being said, UCS is what it is… a giant switch. Turning your server infrastructure into a network is a path which seriously needs to be looked at…hard. You’re now allowing a technical college grad the ability to modify a switch on the network with the possible repercussions of bringing down your entire server farm. At the same time, you need to develop an entirely new core competency level amongst your entire staff. Change management at its finest is a prerequisite, people.

    #3.) Aggressive pricing… Operations utopia? Look at the TCO. ANNUAL SmartNet on every appliance, ANNUAL SmartNet on every additional module needed to make UCS work in the first place, heating and cooling costs of UCS that are 40% higher than your top 3 blade solutions, data center space eaten up for 1/2 of the functionality, and best of all, you get a management console that is complex in nature and severely limited in terms of functionality. “Pods, pods, pods” – infrastructure managers now have to hire pod people! Do NOT overlook the software that binds!

    #4.) What happens if a blade failure occurs? Maybe by now there’s an additional software package you can purchase that notifies you so you can address it. If I were Cisco I’d make it a point… and tack that on to your ever-growing SmartNet spend.

    #5.) Throughput. Every physical server on a UCS platform is “pinned” to another. In order to provide those highly available virtual resources and actually GROW your environment, it requires precise preconfiguration and entails some truly cumbersome procedures if it’s deployed wrong. Here’s an example: if both physical servers that are pinned together suddenly take on a spike in demand for virtual server allocation, one will choke the other off of bandwidth. If you foresee scaling out your UCS in the future, you’re in for a juggling act that requires extensive physical reconfiguring of the UCS chassis; granted, you only have a max of 8 blades available, but that’s another flaw in itself. You damn well better know what is where “physically” moving forward in order to provide the level of virtual resources to your users that Cisco claims to bring you.

    The Future… What does this lead to? More of a Cisco spend, of course! Why would anyone go back and rearrange their physical environment multiple times to accommodate their VMs, demand, and their users when it’s sooo much easier to just buy another UCS chassis! While you’re at it, since the design is reliant upon a very nice switch, and you want layer 3 functionality of course, you’ll need another Nexus 7000! (I think that’s why they named it a Nexus.) Cisco – we guarantee you’ll buy your “NEXt switch from US.” :) I’m coining that btw. You’re growing, and it’s now justified to take on the additional spend due to additional anticipated future growth. This is the Cisco sales model, plain and simple.

    Sorry Cisco, but the HP Virtual Connect environment is far more scalable, far more “inventive”, and it’s proven data center technology that works extremely well. Our friends at Gartner and Forrester agree. (BTW, FCoE is your only option with UCS if you want to get 1/2 of the functionality Cisco touts… Two words for you if your immediate future deems a move to FCoE: Flex Fabric.)

    The HP BL680c G7 delivers 5X the I/O bandwidth and 4X the memory of the comparable Cisco UCS B440 M1 blade, allowing HP to deliver up to 4X as many virtual machines per server. HP has 273 vs. Cisco’s max of 67 VMs per server, to be exact. I almost forgot, the max I/O bandwidth: HP 192GB vs. Cisco’s 40GB!

    Also, if you’re looking at apples to apples, DO NOT let Cisco tell you what blades to compare. They seriously attempt to do this by having a customer compare a B200 M2 head to head with an HP BL680 in order to show any sort of TCO! Cisco’s sales pitch is usually well refined, but comparing half-length blades to HP’s full-length blades is a bit of a stretch. Just do the correct comparison when making a decision; sales pitches are taught to be well polished on both sides, but look at the technology. While you’re at it – check out the A-Series line of switching from HP if you want to compare who’s been more innovative and inventive when infringing upon the other’s market share. :)

  23. slowe

    Adrian, your comment seems to imply some common misperceptions about Cisco UCS. If you’d like, I’d be happy to discuss the pros and cons of both UCS and HP BladeSystem with you and get your perspective as well (and yes, there *are* pros and cons to both technologies). Assuming, of course, that you are actually interested in a balanced discussion of the technologies.

  24. Adrian

    That would be fantastic! Shall we have a little mock head-to-head scenario for building a neutral data center environment? Let’s say 3 interconnecting chassis of each solution with supporting blades and switching of your choice. We are connecting to a SAN that’s in place – any vendor is fine. VM Ware is heavily relied upon and the data center supports 2000 users. Add any additional information you wish (ISP bandwidth, etc…) in order to conceptualize and create the very best Cisco solution, and tell me your reasoning. I guarantee that I can build a better solution for less with HP. We will compare solutions, probably refining areas along the way, but let’s base the outcome on who is able to offer the most throughput, VMs, TCO, data center space utilization, efficiency, etc… to the mock environment. Pros and cons will all be taken into account and we’ll see what comes of it!

  25. slowe

    Adrian, I don’t believe that 100% Cisco is necessarily right for every customer. Likewise, I don’t believe that 100% HP is necessarily right for every customer. What *is* right for every customer is focusing on the functional requirements and matching the products against those functional requirements.

    Now, having said all that, what I offered to do with you was discuss what appear to be some misconceptions you have about Cisco UCS compared to the HP BladeSystem. I’m not interested in some sort of comparative contest—only in understanding and helping others understand the technologies and their advantages and disadvantages. Cisco UCS has some advantages—and some disadvantages. HP BladeSystem has some advantages—and some disadvantages. Knowing these advantages and disadvantages allows us to create the best solutions for customers.

    BTW, it’s “VMware” not “VM Ware”. A little thing, I know, but VMware gets a little tweaked when you use it incorrectly.