Using IP-Based Storage with VMware vSphere on Cisco UCS

I had a reader contact me with a couple of questions, one of which I felt warranted a blog post. Paraphrased, the question was this: How do I make IP-based storage work with VMware vSphere on Cisco UCS?

At first glance, you might look at this question and scoff. Remember, though, that Cisco UCS does—at this time—have a few limitations that make this more complicated than it might appear. Specifically:

  • Recall that the UCS 6100XP fabric interconnects only have two kinds of ports: server ports and uplink ports.
  • Server ports are southbound, meaning they can only connect to the I/O Modules running in the back of the blade chassis.
  • Uplink ports are northbound, meaning they can only connect to an upstream switch. They cannot be used to connect directly to another end host or directly to storage.

With this in mind, then, how does one connect IP-based storage to a Cisco UCS? In these scenarios, you must have another set of Ethernet switches between the 6100XP fabric interconnects and the target storage array. Further, since the 6100XP fabric interconnects require 10GbE uplinks and do not—at this time—offer any 1GbE uplink functionality, you need to have the right switches between the 6100XP fabric interconnects and the target storage array.

Naturally, the Nexus 5000 fits the bill quite nicely. You can use a pair of Nexus 5000 switches between the UCS 6100XP interconnects and the storage array. Dual-connect the 6100XP interconnects to the Nexus 5000 switches for redundancy and active-active data connections, and dual-connect the target storage array to the Nexus 5000 switches for redundancy and (depending upon the array) active-active data connections. It would look something like this:


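As a rough illustration, the Nexus 5000 side of such a topology might be configured along these lines. (This is only a sketch: the interface numbers, VLAN IDs, and port-channel numbers are hypothetical, and the exact syntax can vary by NX-OS release and by what the storage array supports.)

```
! Hypothetical NX-OS sketch on one Nexus 5000
vlan 100
  name IP-Storage

! Port channel carrying the uplinks from a UCS 6100XP fabric interconnect
interface port-channel10
  description Uplinks from UCS 6100XP Fabric A
  switchport mode trunk
  switchport trunk allowed vlan 100,200

interface Ethernet1/1-2
  description Members of Po10
  switchport mode trunk
  channel-group 10 mode active

! Storage array port; whether the array side can aggregate links
! (and how) depends entirely on the array vendor
interface Ethernet1/10
  description Storage array 10GbE port
  switchport mode access
  switchport access vlan 100
```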
From the VMware side of the house, since you’re using 10GbE end-to-end, it’s very unlikely that you’ll need to worry about bandwidth; that eliminates any concerns over multiple VMkernel ports on multiple subnets or using multiple NFS targets so as to be able to use link aggregation. (I’m not entirely sure you could use link aggregation with the 6100XP interconnects anyway. Anyone?) However, since you are talking about Cisco UCS, you’ll have only two 10GbE connections (unless you’re using the full-width blade, which is unlikely). This means you’ll need to pay careful attention to the VMware vSwitch (or dvSwitch, or Nexus 1000V) configuration. In general, the recommendation in this sort of configuration is to place Service Console, VMotion, and IP-based storage traffic on one 10GbE uplink, place virtual machine traffic on the second 10GbE uplink, and use whatever mechanisms are available to specify which uplink should be preferred during normal operation. This provides redundancy in the uplinks while still maintaining some separation of traffic.
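As a sketch of what that layout might look like on a classic ESX vSwitch, the service console commands would be roughly as follows. (The port group names are illustrative, and the commands assume the two 10GbE adapters show up as vmnic0 and vmnic1.)

```
# Attach both 10GbE uplinks to the default vSwitch (classic ESX)
esxcfg-vswitch -L vmnic0 vSwitch0
esxcfg-vswitch -L vmnic1 vSwitch0

# Port groups for the different traffic types (names are illustrative)
esxcfg-vswitch -A "Service Console" vSwitch0
esxcfg-vswitch -A "VMotion" vSwitch0
esxcfg-vswitch -A "IP Storage" vSwitch0
esxcfg-vswitch -A "VM Network" vSwitch0
```

The per-port-group active/standby failover order—for example, vmnic0 active for Service Console, VMotion, and IP storage, with vmnic1 active for VM traffic—is then set in each port group's NIC teaming policy, which you would do through the vSphere Client (or PowerCLI) rather than through these commands.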

One quick side note: although I’m talking IP-based storage here, block-based storage fans need to remember that Cisco UCS does not—at this time—support northbound FCoE. That means that although you have FCoE support southbound, and FCoE support in the Nexus 5000, and possibly FCoE support in your storage arrays, you still can’t do end-to-end FCoE with Cisco UCS.

For those readers who are very familiar with Cisco UCS and Nexus, this will seem like a pretty simplistic post. However, we need to keep in mind that there are lots of readers out there who have not had the same level of exposure. Hopefully, this will help provide some guidance and food for thought.

(Of course, one could just buy a Vblock and not have to worry about putting all the pieces together…hey, can’t blame me for trying, right?)

Clarifications, questions, or suggestions are welcome in the comments below. Thanks!



  1. Aaron Delp

    Hey Scott! A couple of comments on the post. According to the UCS Book, you can aggregate links both northbound and southbound. I have never done it but it appears to be possible.

    How would this change for iSCSI? Would it? I’m thinking it won’t but I was wondering if you heard anything else.

    Lastly, a word of caution to everyone using IP-based storage, something you and I have covered many times already: in this model you CANNOT use stateless blades unless you introduce FC into this design. The only options VMware supports today are booting ESXi or ESX from local drives, or booting from FC. This is more of an advanced topic, but it's worth noting for the above design, in my opinion.

    Great post!

  2. Craig

    Many users are switching away from FC to iSCSI or NAS due to the cost of an FC SAN implementation. With the Cisco UCS architecture, the FC uplinks help us avoid unnecessary fiber connections and reduce the number of FC switches required in the environment. Having spoken to many clients, though, I find they would prefer to stick with FC in this case, since they would no longer be populating two FC connections per server as they do today.

  3. Paul Richards

    Hi Scott,
    Great post, and timely too since I’m actually in the process of configuring some IP-based storage for some vSphere hosts on the UCS.

    I just wanted to point out that your network config listed here is based on the current Menlo adapters. When the Palo adapters are available, you will be able to break up the traffic and manage it more effectively.

    Another thing to consider is the number of server uplinks to have per 5108. If you plan to use NFS datastores, you may want to increase the number of server uplink ports. Keep in mind that this has an impact on how many chassis can be connected to the current 6100s.

    And yes, you can absolutely purchase a VBlock and forget about NFS completely! :)


  4. AFidel

    You still might have to deal with multiple NFS targets, because the array's internal data structures might not allow sufficient performance even when there's plenty of bandwidth from both the disks and the interconnect.

  5. slowe


    Aaron,

    Great info on the link aggregation southbound! Thanks for letting me know that. Is that in the book I loaned you? Bookmark the page for me!

    No change as far as I can see with regards to iSCSI, which is why I marked this for IP-based storage and not just NFS.

    And you’re absolutely right about the stateless requirements…


    Craig,

    I agree—the cost of running an FC infrastructure is much lower with UCS due to the leverage of FCoE within the system.

    Paul Richards,

    You are correct—but the discussion of the impact of the VIC on VMware vSphere designs is a much greater discussion than could be included here. It is a topic I hope to blog on very soon.

    Your point about per-chassis uplinks is also well-taken. It’s definitely an important design consideration. Perhaps I should consider writing a UCS design blog post…anyone interested in that?


    AFidel,

    You are correct about the possibility of needing multiple NFS exports, but that would be due to storage constraints, not networking constraints. Again, that's an entirely separate discussion. Still, it's an important point to make, so thank you.

    Great comments everyone!

  6. Rodos

    Scott, so now you are at EMC you are admitting you are dumbing down your posts as well as putting in sales pitches. :) Could not resist.

    There are a lot of design issues you are going to want to consider, more than can go into a comment.

    To really do this well you need to understand pinning (assuming you are doing the right thing and using end-host mode), work through using virtual port channels northbound (10Gb may be a lot, but how many chassis are there, and how much traffic is northbound?), and think about how and where you want to do separation and monitoring.

    You are also going to want to consider how much bandwidth you need between the two fabrics (A/B) as you scale out. Remember, there is no interface between the two fabric interconnects (apart from those that maintain the UCSM cluster), so any traffic that goes across fabrics needs to be planned for.

    Okay, now I am rambling. It would be nice to see some further UCS discussions at any level of technical detail.


  7. Aaron Delp

    Hey Scott – I like the way you think. I did an article on Cisco UCS and the number of FEX uplinks and how it affects chassis bandwidth, chassis maximum, and number of vNICs with Palo. Check it out (shameless plug)

    Also, the southbound link aggregation statement is in the UCS book we received when we went to visit TAC that one day. I’ll bring it to lunch to show you. I also have your class book to give back. Thanks again!


  8. Jose Ruelas

    Would it be possible to share the name of the book mentioned? I am very interested in a good read about southbound link aggregation. (I've been told that it is not possible.)
    Kind regards,
    Jose Ruelas

  9. slowe


    The book mentioned is this one:

    It was written by the engineers who helped design UCS when it was still referred to as “Project California”. Definitely a must-read if you are going to be working with UCS to any significant degree.

    Hope this helps!

  10. Sal Collora


    In your picture you have the FIs redundantly connected to the upstream 5Ks. I am not sure these connections are required or useful. With the failover feature of the FIs, it's my understanding that you might actually want them with LEFT-LEFT and RIGHT-RIGHT connections. The failover feature, to my knowledge, will sense an upstream failure and take down the correct side. It would be interesting to couple this feature with an upstream vPC configured on the 5Ks and an EtherChannel configured on the FIs. I intend to test this when my lab is up and running, but it's something to consider. All of my designs use a single (or channeled) link configured LEFT-LEFT and RIGHT-RIGHT. What do you think?

  11. Chris

    For NetApp (FAS3140), why can't I just trunk the northbound connection of the 6100 into a VLAN trunk connection on the NetApp?

    It's my understanding from Cisco that the northbound ports are trunk ports and not access ports, so your storage must be able to support trunk ports, which NetApp does.

  12. Saju


    I came to know internally from Cisco that EMC storage has been tested connecting directly to the fabric interconnects:

    “Direct connection has not been tested and is not a supported topology at this time outside of EMC storage where the testing has been done.”


  13. Richard Gray

    Hi Scott,

    A couple of questions re: the UCS. I have finished the spec for an HP blade system using VirtualConnect, but I am now looking at this Cisco kit also. However, I know little about it right now.

    Could you use a couple of Catalyst switches instead of the Nexus 5000s northbound of the UCS? Say, 3750s with 10GbE links. That would take care of my IP storage, and I could aggregate ports from each 3750 for connection to my SAN (2x1GbE links from a NetApp as a VIF per head, or possibly invest in 10GbE cards for it).

    I also use FC for RDMs; how would I go about hooking this into my NetApp? Is this where I would need to start using FCoE? I suppose my 3750s wouldn't support this either? I know our budget won't stretch to include two Nexus 5000s! I'm a little lost here.

    Many Thanks,

  14. Shawn Saunders

    The book mentioned seems to be no longer available. Would you share the name so we can look elsewhere?

  15. slowe

    Richard, you could use Catalyst 3750s with 10GbE uplinks to handle IP-based traffic out of the UCS. You'd also need to deploy MDS (or some other FC switch) to handle the FC traffic coming out of the UCS. Until the release of UCS 1.4 (just within the last couple of days), you couldn't run FCoE out of the UCS; you had to use FC. Even now, you might still choose to use FC northbound out of the UCS instead of FCoE, which means that a combination of 10GbE-equipped Catalyst 3750s and MDS Fibre Channel switches will work.

    Shawn, it’s been replaced with a newer version. You should be able to find it on Amazon.

  16. Robert Maxwell

    I also would like to know the name of the UCS book, as Lulu doesn't have it.


  17. slowe

    Robert Maxwell and others, for those interested in the latest version/edition of the Cisco UCS book, here’s what you need:

    This is the updated version that replaces the earlier “black book” available through Lulu. As with the first edition, this version is most definitely worth reading.

  18. Burak

    Hi all,

    I wonder, can I use a Catalyst 3750 to connect my 6120 interconnects to the Ethernet network? If so, which SFPs must I use?

