More on Cisco UCS

The entire IT world is abuzz with talk of Cisco’s Unified Computing System (UCS). I pointed out a few of the UCS announcements in this earlier post, and now I’d like to expand a bit on the solution itself.

UCS essentially consists of 4 major components:

  • The UCS 6100 Series Fabric Interconnect devices, running Cisco UCS Manager
  • The UCS 2100 Series Fabric Extender, with up to 2 of them running in each chassis
  • The UCS B-Series blades, either half-width (8 blades per chassis) or full-width (4 blades per chassis), with up to 40 chassis per system (see the quick scale sketch after this list)
  • UCS network adapters supporting DCE/CEE/DCB and FCoE, apparently coming in three different flavors (efficiency/performance, compatibility, and virtualization)
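
To put those numbers in perspective, here’s a quick back-of-the-envelope sketch in Python using only the figures quoted above; the per-system totals are my own arithmetic, not published Cisco maximums.

    # Back-of-the-envelope scale check using only the numbers quoted above.
    # The per-system totals are simple arithmetic, not Cisco-published figures.
    CHASSIS_PER_SYSTEM = 40       # up to 40 chassis managed as one system
    HALF_WIDTH_PER_CHASSIS = 8    # half-width B-Series blades per chassis
    FULL_WIDTH_PER_CHASSIS = 4    # full-width B-Series blades per chassis
    FEX_PER_CHASSIS = 2           # UCS 2100 fabric extenders per chassis

    print("Max half-width blades:", CHASSIS_PER_SYSTEM * HALF_WIDTH_PER_CHASSIS)  # 320
    print("Max full-width blades:", CHASSIS_PER_SYSTEM * FULL_WIDTH_PER_CHASSIS)  # 160
    print("Total fabric extenders:", CHASSIS_PER_SYSTEM * FEX_PER_CHASSIS)        # 80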

This diagram shows an overview of UCS:

[Image: Cisco UCS components overview (cisco-ucs-components-overview.gif)]

With the exception of the UCS Manager software and the Converged Network Adapters (CNAs), everything else is pretty standard stuff:

  • The UCS 6100 is essentially a Nexus 5000, but with the ability to run the UCS Manager software.
  • The UCS 2100 is essentially the same as the Nexus 2000 Fabric Extender (FEX), but in a form factor that is intended to plug into the UCS chassis.
  • The B-Series blades are industry standard blades running Intel Nehalem CPUs, standard hot-plug hard drives, and 10Gb CNAs.

The CNAs appear to be one area in which there may be some innovation. In particular, the virtualization-optimized CNA appears to extend some new functionality into the virtualization layer itself, although it’s currently unclear exactly how, or how the virtualization layer will leverage that functionality. It sounds like SR-IOV to me, but others are indicating that it’s an offshoot of Intel’s VT-d technology. Speaking specifically with regard to VMware ESX/ESXi, I would guess that this will need to be combined with VMDirectPath, as it appears to replace the need for the vSwitch within the ESX/ESXi host. Personally, I’d rather not replace the vSwitch and instead allow the UCS 6100 and/or UCS Manager to manage Nexus 1000V VEMs on the ESX/ESXi hosts. That would truly bring unification without adding complexity.

The real wildcard here is UCS Manager. Although the Cisco webcast spoke frequently of the “open APIs” and “XML APIs” that partners can leverage, nothing substantial was released regarding UCS Manager itself. Lots of questions have yet to be answered, but the one that really jumps out at me is this:

How will an organization need to organize their storage in order to take advantage of UCS?

I’m guessing here that organizations will need to do boot-from-SAN in order to gain the true flexibility and agility that UCS is supposed to provide. In that case, what Cisco is supplying is not that dramatically different from a multi-vendor solution that utilizes something like Scalent to provide automation. Of course, Cisco’s solution is from a single vendor and is supposedly more integrated.
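
To illustrate what that boot-from-SAN requirement buys you, here’s a rough Python sketch of the kind of “stateless” server definition such a system would have to carry around. Everything in it (the field names, the values, the whole profile-as-data idea) is hypothetical on my part, since Cisco hasn’t published the UCS Manager details; the point is simply that if identity and boot configuration live in a portable profile rather than on local disks, a workload can be re-applied to any physical blade.

    # Purely hypothetical sketch of a "stateless" server definition. None of
    # these field names or values come from Cisco; they only illustrate why
    # boot-from-SAN matters: with no state on local disk, the profile can be
    # re-applied to any physical blade.
    example_profile = {
        "name": "esx-host-01",
        "vnics": [{"name": "vnic0", "mac": "00:25:b5:00:00:01", "vlan": 100}],
        "vhbas": [{"name": "vhba0", "wwpn": "20:00:00:25:b5:00:00:01"}],
        "boot_policy": {
            "type": "san",                              # boot from SAN, not local disk
            "target_wwpn": "50:06:01:60:00:00:00:01",   # storage array port (made up)
            "lun": 0,
        },
    }

    def apply_profile(blade, profile):
        """Re-targeting a workload is conceptually just re-applying the profile."""
        blade.update(profile)    # the blade itself holds no persistent state
        return blade

    print(apply_profile({"slot": "chassis-3/blade-5"}, example_profile)["boot_policy"])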

So, there are my initial thoughts. What about you?


  1. TimC

    Much ado about nothing to me. I also think it’ll be done before it’s gotten off the ground if they’re really tying themselves as tightly to EMC as EMC claims.

    Also, I’m on the same page as you… they’ve essentially told us nothing so far. What will make or break this is the software, which we haven’t so much as seen a screenshot of. If the software sucks, this will be no better, and possibly worse, than sourcing from multiple vendors.

  2. Chad Sakac

    Scott – the adapter virtualization isn’t based on SR-IOV (or MR-IOV), but instead is the untold story here, IMHO.

    The network adapters have their own discrete roadmap over the next few months (and everyone needs to consider that Cisco couldn’t say everything before the official Nehalem launch, gang). The initial shipping adapters don’t have Cisco’s full sauce; the follow-on (very close) Palo adapters do. It’s based on extensions to the Ethernet frame standard (being submitted to the standards bodies).

    When I was first briefed on the VN-Link idea about a year ago, the funny thing was that the design point was the UCS design: the ability to have NO distributed switch in the vmkernel, but to carry the vNIC MACs external to the hypervisor per se. As a side effect, the bonus was that you could p-to-v a Nexus and get the N1Kv (instead of using VN-Link to carry the virtual MAC to an external switch, it goes to a VM). Of course, everyone was so pumped about the N1Kv (which is very cool) that people generally missed the other design point.

    That describes the “adapter virtualization” for the Ethernet stack; there’s similar stuff for the FC part of the FCoE equation.

    Re closeness with EMC, speaking with some authority here Tim – we are VERY close on this – but I am making no implications of exclusivity (and I don’t think anyone is – or should be).

    The Cisco crew aren’t tightly linked with any one party in the whole stack – we are dead serious about “openness and choice” in this effort (no one wants things that limit choice). Just like our efforts with VMware in general, EMC must win on our own merits. What I can say is that there is a genuine shared vision across all three companies, and that our resource investments and focus are… substantial. There’s also a certain element of this which is non-technological.

  3. Chad Sakac

    The non-technological point is that virtualization as a whole – and UCS is a manifestation of this – crosses functional organizational boundaries of application/OS/server/network/storage.

    We’re increasingly finding these are CIO-level discussions (I was actually asked almost this exact question during the exec/analyst call), and Cisco, VMware, and EMC – along with HP, IBM, and Microsoft, but few others – operate in the technical realm but are also strategic vendors for the largest customers.

    In the mid-market, there aren’t the same organizational boundaries (for better and for worse).

  4. Chad Sakac

    Re-reading my comments, one thing I wasn’t clear on: the day-one Menlo adapters have VN-Link; the Palo adapters have the broader CNA virtualization bits.

  5. Chad Sakac

    And since three incoherent comment posts aren’t enough :-) – while more customers choose EMC for their storage than anyone else, and our share continues to grow with strong margins (aka we must be doing SOMETHING right :-), even if you aren’t an EMC storage customer or an EMC fan in general, there’s another dimension.

    Ed Bugnion and I (along with Scott Davis from VMware) covered this at VMworld in Cannes (see about mid-way through session TP03). The recorded session is here: http://virtualgeek.typepad.com/virtual_geek/2009/03/vmworld-europe-2009-emc-post-show-report.html

    Take a look at that session again in light of today’s announcement. You’ll see this is far more than a server announcement; it’s about a lot more than 10GbE and FCoE. Ed also does an interesting thing halfway through as he points out that our partnership isn’t the simple model people think of on the surface. Cisco uses RSA and EMC Smarts (as an OEM), Cisco and EMC have embedded code in the vmkernel of the vSphere release, and VMware is used extensively all over the stack (including Cisco and EMC VM appliances).

    Every customer needs to make their own choice – but the depth of integration is substantial – more than just Paul and Joe being part of the launch webcast.

  6. Dmitry

    You’re absolutely right on the storage side. Storage is the biggest and most complex component in the enterprise datacenter today, and Cisco didn’t include it in their UCS offering as a component, which it would need to be truly unified.
    One can achieve the same solution today (with or without the help of a systems integrator) by assembling the components oneself. Cisco’s software could make some difference, but that’s still an open question.

  7. slowe

    Chad,

    Thanks for your comments–even if they are a bit rambling. :-)

    You’re the second person now to tell me that Palo is not based on SR-IOV or MR-IOV, yet neither has been able to tell me what technologies are in play to allow multiple VMs to be attached to a single physical adapter. One person has mentioned that it is Intel VT-c, yet if you check this URL:

    http://www.intel.com/network/connectivity/solutions/virtualization.htm

    You will see that Intel VT-c is actually three different technologies: Intel I/OAT, VMDq, and… lo and behold… PCI-SIG SR-IOV.

    Further, based on what I’ve seen thus far–though I’m sure that you’ve seen a great deal more by virtue of your position–leveraging SR-IOV and eliminating the vSwitch entirely means using something like VMDirectPath, which then introduces challenges of its own around VMotion. So the idea of eliminating the vSwitch and attaching a VM directly to a physical switch port sounds great, but somehow I suspect–at least initially–there’s going to be some “gotchas” that will bite people. Again, feel free to correct me anywhere along the way.
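
    As an aside for anyone who wants to poke at the generic PCI-SIG SR-IOV model themselves, a minimal sketch like the one below lists the virtual functions behind each SR-IOV-capable physical function on a Linux host. Nothing in it is UCS- or Palo-specific, and it assumes a kernel and driver that expose the sriov_* attributes and virtfn* links in sysfs.

        # Minimal sketch: enumerate SR-IOV virtual functions via Linux sysfs.
        # Nothing UCS- or Palo-specific; assumes a kernel/driver that exposes
        # the sriov_* attributes and virtfn* links under /sys/bus/pci/devices.
        import glob
        import os

        SYSFS_PCI = "/sys/bus/pci/devices"

        def read_attr(dev_path, name):
            try:
                with open(os.path.join(dev_path, name)) as f:
                    return f.read().strip()
            except OSError:
                return None

        for dev_path in sorted(glob.glob(os.path.join(SYSFS_PCI, "*"))):
            total_vfs = read_attr(dev_path, "sriov_totalvfs")
            if total_vfs is None:
                continue  # not an SR-IOV-capable physical function
            enabled_vfs = read_attr(dev_path, "sriov_numvfs")
            print("%s: %s of %s VFs enabled" % (os.path.basename(dev_path), enabled_vfs, total_vfs))
            for link in sorted(glob.glob(os.path.join(dev_path, "virtfn*"))):
                print("  VF -> %s" % os.path.basename(os.readlink(link)))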

  8. Brad Hedlund

    Scott,
    The Palo adapter is absolutely PCI-SIG SR-IOV. Everything in UCS is standards based.

    As for the storage: for the completely stateless computing environment that UCS can provide, yes, that does imply booting from the SAN.

  9. Chad Sakac

    Trying to be more coherent and less rambly here in the morning: from my exposure to it, VN-Link is an extension to Ethernet that carries the Ethernet frames from the VM itself, via the extender, to the 6100. Palo is an SR-IOV adapter that allows for a large number of virtual interfaces (128 was the spec I saw), which is particularly important for the FC side of FCoE.

    We’ve got a bunch of UCS systems – they’ve been in elab for a while – I’ll check it out and double confirm.

  10. slowe

    Chad,

    From what I understand, I would agree with that statement. VN-Tag (not VN-Link, which is different, AFAIK) is the proposed IEEE spec that carries the VM-specific information through the network (i.e., outside the host), and SR-IOV is the technology used to allow multiple VMs to access the same physical adapter (i.e., inside the host). Two different technologies, serving two different purposes, but used together in this situation.
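
    To make the “outside the host” half of that concrete, here’s a rough sketch of how a VN-Tag-style tag would ride in an Ethernet frame: inserted after the source MAC, ahead of the original EtherType, carrying source and destination virtual interface (VIF) IDs. Treat the 0x8926 EtherType and the exact field widths below as my reading of the proposal rather than a ratified format, since the spec is still working its way through the standards process.

        # Illustrative sketch of a VN-Tag-style tagged Ethernet frame. The
        # EtherType and field widths are my reading of the draft (assumptions,
        # not a ratified format): a 4-byte tag body carrying a direction bit,
        # a pointer bit, a destination VIF ID, a looped bit, and a source VIF ID.
        import struct

        VNTAG_ETHERTYPE = 0x8926  # believed to be the VN-Tag EtherType

        def add_vntag(frame, dst_vif, src_vif, direction=0, pointer=0, looped=0):
            """Insert a VN-Tag-style tag right after the destination/source MACs."""
            dst_mac, src_mac, rest = frame[:6], frame[6:12], frame[12:]
            word = ((direction & 0x1) << 31) | ((pointer & 0x1) << 30) | \
                   ((dst_vif & 0x3FFF) << 16) | ((looped & 0x1) << 15) | \
                   (src_vif & 0xFFF)
            tag = struct.pack("!HI", VNTAG_ETHERTYPE, word)
            return dst_mac + src_mac + tag + rest

        # Dummy untagged frame: zeroed MACs, IPv4 EtherType, fake payload.
        untagged = bytes(12) + struct.pack("!H", 0x0800) + b"payload"
        print(add_vntag(untagged, dst_vif=42, src_vif=7).hex())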

  11. Brad Hedlund

    Scott,
    VN-Link is an umbrella Cisco marketing term that describes granular per-virtual-machine visibility and policy control. VN-Link can be accomplished in two ways: 1) in software with the Nexus 1000V, or 2) in hardware with VN-Tags and SR-IOV.

    So, VN-Link is not different than VN-Tag — rather, VN-Tag is one way to achieve “VN-Link”.

    With Cisco UCS and the Palo adapter, you will be able to achieve “Hypervisor Bypass” (VM Direct Path), where the VM writes directly to its SR-IOV slice of the adapter. The adapter then applies a VN-Tag to uniquely identify the “virtual NIC” the traffic belongs to — VN-Tag acts like a virtual patch cable running from the SR-IOV adapter to the UCS 6100 fabric switch. At this point the virtual machine is managed as if it is connected directly to the UCS 6100 fabric switch.

    Another advantage is much better network I/O performance for virtual machines, closer to bare metal. The hypervisor is no longer context-switching to copy data from a vNIC buffer to the adapter buffers, which reduces latency and improves throughput. Furthermore, because the hypervisor CPU is freed from network I/O responsibilities, you get back precious CPU cycles for hosting more virtual machines.

    Hope this clarifies.

    Brad

  12. slowe

    Brad,

    That’s very helpful information, thanks!
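
    For other readers, here’s a purely conceptual sketch (toy Python, not real hypervisor code, every name made up) of Brad’s point about why hypervisor bypass saves a buffer copy and the associated context switch:

        # Toy model only: why hypervisor bypass saves a copy and a context
        # switch. Not real hypervisor code; every name here is made up.

        def vswitch_path(packet, vnic_buffer, adapter_queue):
            # Traditional path: the guest writes into its vNIC buffer, then the
            # hypervisor (vSwitch) is scheduled to copy the frame to the adapter.
            vnic_buffer.append(packet)      # guest -> vNIC buffer
            frame = vnic_buffer.pop(0)      # hypervisor context switch
            adapter_queue.append(frame)     # vNIC buffer -> adapter queue

        def bypass_path(packet, vf_queue):
            # Bypass path: the guest driver writes straight into the queue of its
            # SR-IOV virtual function; the adapter tags and switches the frame.
            vf_queue.append(packet)         # no hypervisor involvement

        vnic, adapter, vf = [], [], []
        vswitch_path(b"frame-1", vnic, adapter)
        bypass_path(b"frame-2", vf)
        print(adapter, vf)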

  13. Nik Simpson

    I think it’s important to note that in all this discussion of SR-IOV, VN-Tag, VT-c, VT-d (and the rest of the alphabet soup), there is nothing that other vendors can’t leverage (assuming VN-Tag is standardized). In fact, HP is already offering an SR-IOV 10 GbE adapter for its C-class blades and has been doing so for 6 months. So at the end of the day it will come down to who makes this stuff the most transparent to the poor sysadmins tasked with tying it all together.

  14. slowe

    Nik,

    You are quite correct–there is nothing stopping any other vendor from incorporating these technologies into their own products. Cisco appears, based on what we’ve seen thus far, to have done a great job with your second point–making it transparent to the end-user community–although it is still quite early to tell.

  15. Brad Hedlund

    Nik,
    The HP Flex-NIC you must be referring to is not SR-IOV based. Rather, Flex-10 and Flex-NIC are HP proprietary technologies.

  16. Chad Sakac

    Brad – one thing to clarify my SR-IOV comment: in VMware land, SR-IOV is dependent on the use of VMDirectPath I/O (which in gen 1 has significant limits, designed to be lifted in gen 2). Cisco has been working with VMware to deliver a slightly accelerated functional capability here (using the underlying SR-IOV model) prior to VMDirectPath I/O gen 2.

  17. bob

    One aspect that was touched on but is still not clear to me is the compatibility of VMDirectPath and vMotion. VMDirectPath presumably interfaces directly with the native VF (SR-IOV virtual function, i.e., a physical interface). vMotion is possible because ESX abstracts the physical I/O interfaces used by the VM. How is vMotion accomplished if the VM is accessing the native/physical interface directly?

  18. Brad Hedlund

    @bob – A standard driver could be used, or NIC vendor-specific “plugins”.
