
Welcome to Technology Short Take #29! This is another installation in my irregularly-published series of links, thoughts, rants, and raves across various data center-related fields of technology. As always, I hope you find something useful here.

Networking

  • Who out there has played around with Mininet yet? Looks like this is another tool I need to add to my toolbox as I continue to explore networking technologies like OpenFlow, Open vSwitch, and others.
  • William Lam has a recent post on some useful VXLAN commands found in ESXCLI with vSphere 5.1. I’m a CLI fan, so I like this sort of stuff.
  • I still have a lot to learn about OpenFlow and networking, but this article from June of last year (it appears to have been written by Ivan Pepelnjak) discusses some of the potential scalability concerns around early versions of the OpenFlow protocol. In particular, the use of OpenFlow to perform granular per-flow control when there are thousands (or maybe only hundreds) of flows presents a scalability challenge (for now, at least). In my mind, this isn’t an indictment of OpenFlow, but rather an indictment of the way that OpenFlow is being used. I think that’s the point Ivan tried to make as well—it’s the architecture and how OpenFlow is used that makes a difference. (Is that a reasonable summary, Ivan?)
  • Brad Hedlund (who will be my co-worker starting on 2/11) created a great explanation of network virtualization that clearly breaks down the components and explains their purpose and function. Great job, Brad.
  • One of the things I like about Open vSwitch (OVS) is that it is so incredibly versatile. Case in point: here’s a post on using OVS to connect LXC containers running on different hosts via GRE tunnels. Handy!
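To make Ivan's scalability point above concrete, here's a quick back-of-the-envelope sketch (the host counts are made up; only the growth rates matter) comparing how fast a flow table grows under granular per-flow control versus coarser per-destination forwarding:

```python
# Illustrative numbers only; the point is the growth rate, not the exact values.

def per_flow_entries(hosts):
    """One OpenFlow entry per ordered (src, dst) host pair: quadratic growth."""
    return hosts * (hosts - 1)

def per_destination_entries(hosts):
    """One coarser entry per destination host: linear growth."""
    return hosts

for n in (10, 100, 1000):
    print(f"{n} hosts: {per_flow_entries(n)} per-flow entries "
          f"vs {per_destination_entries(n)} per-destination entries")
```

At a thousand hosts the per-flow approach needs nearly a million entries, which is exactly the kind of table pressure the article describes.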

Servers/Hardware

  • Cisco UCS is pretty cool in that it makes automation of compute hardware easier through such abstractions as server profiles. Now, you can also automate UCS with Chef. I traded a few tweets with some Puppet folks, and they indicated they’re looking at this as well.
  • Speaking of Puppet and hardware, I also saw a mention on Twitter about a Puppet module that will manage the configuration of a NetApp filer. Does anyone have a URL with more information on that?
  • Continuing the thread on configuration management systems running on non-compute hardware (I suppose this shouldn’t be under the “Servers/Hardware” section any longer!), I also found references to running CFEngine on network appliances and running Chef on Arista switches. That’s kind of cool. What kind of coolness would result from even greater integration between an SDN controller and a declarative configuration management tool? Hmmm…

Security

  • Want full-disk encryption in Ubuntu, using AES-XTS-PLAIN64? Here’s a detailed write-up on how to do it.
  • In posts and talks I’ve given about personal productivity, I’ve spoken about the need to minimize “friction,” that unspoken drag that makes certain tasks or workflows more difficult and harder to adopt. Tal Klein has a great post on how friction comes into play with security as well.

Cloud Computing/Cloud Management

  • If you, like me, are constantly on the search for more quality information on OpenStack and its components, then you’ll probably find this post on getting Cinder up and running to be helpful. (I did, at least.)
  • Mirantis—recently the recipient of $10 million in funding from various sources—posted a write-up in late November 2012 on troubleshooting some DNS and DHCP service configuration issues in OpenStack Nova. The post is a bit specific to work Mirantis did in integrating an InfoBlox appliance into OpenStack, but might be useful in other situations as well.
  • I found this article on Packstack, a tool used to transform Fedora 17/18, CentOS 6, or RHEL 6 servers into a working OpenStack deployment (Folsom). It seems to me that lots of people understand that getting an OpenStack cloud up and running is a bit more difficult than it should be, and are therefore focusing efforts on making it easier.
  • DevStack is another proof point of the effort going into making it easier to get OpenStack up and running, although the focus for DevStack is on single-host development environments (typically virtual themselves). Here’s one write-up on DevStack; here’s another one by Cody Bunch, and yet another one by the inimitable Brent Salisbury.

Operating Systems/Applications

  • If you’re interested in learning Puppet, there are a great many resources out there; in fact, I’ve already mentioned many of them in previous posts. I recently came across these Example42 Puppet Tutorials. I haven’t had the chance to review them myself yet, but it looks like they might be a useful resource as well.
  • Speaking of Puppet, the puppet-lint tool is very handy for ensuring that your Puppet manifest syntax is correct and follows the style guidelines. The tool has recently been updated to help fix issues as well. Read here for more information.

Storage

  • Greg Schulz (aka StorageIO) has a couple of VMware storage tips posts you might find useful reading. Part 1 is here, part 2 is here. Enjoy!
  • Amar Kapadia suggests that adding LTFS to Swift might create an offering that could give AWS Glacier a real run for the money.
  • Gluster interests me. Perhaps it shouldn’t, but it does. For example, the idea of hosting VMs on Gluster (similar to the setup described here) seems quite interesting, and the work being done to integrate KVM/QEMU with Gluster also looks promising. If I can ever get my home lab into the right shape, I’m going to do some testing with this. Anyone done anything with Gluster?
  • Erik Smith has a very informative write-up on why FIP snooping is important when using FCoE.
  • Via this post on ten useful OpenStack Swift features, I found this page on how to build the “Swift All in One,” a useful VM for learning all about Swift.

Virtualization

  • There’s no GUI for it, but it’s kind of cool that you can indeed create VM anti-affinity rules in Hyper-V using PowerShell. This is another example of how Hyper-V continues to get more competent. Ignore Microsoft and Hyper-V at your own risk…
  • Frank Denneman takes a quick look at using user-defined NetIOC network resource pools to isolate and protect IP-based storage traffic from within the guest (i.e., using NFS or iSCSI from within the guest OS, not through the VMkernel). Naturally, this technique could be used to “protect” or “enhance” other types of important traffic flows to/from your guest OS instances as well.
  • Andre Leibovici has a brief write-up on the PowerShell module for the Nicira Network Virtualization Platform (NVP). Interesting stuff…
  • This write-up by Falko Timme on using BoxGrinder to create virtual appliances for KVM was interesting. I might have to take a look at BoxGrinder and see what it’s all about.
  • In case you hadn’t heard, OVF 2.0 has been announced/released by the DMTF. Winston Bumpus of VMware’s Office of the CTO has more information in this post. I also found the OVF 2.0 frequently asked questions (FAQs) to be helpful. Of course, the real question is how long it will be before vendors add support for OVF 2.0, and how extensive that support actually is.

And that’s it for this time around! Feel free to share your thoughts, suggestions, clarifications, or corrections in the comments below. I encourage your feedback, and thanks for reading.


Welcome to Technology Short Take #18! I hope you find something useful in this collection of networking, OS, storage, and virtualization links. Enjoy!

Networking

The number of articles in my “Networking” bucket continues to overflow; I have so many articles on so many topics (soft switching, OpenFlow, Open vSwitch, MPLS) that it’s hard to get my head wrapped around all of it. Here are a few posts that stuck out to me:

  • Ivan Pepelnjak has a very well-written post explaining the various ways that virtual networking can be decoupled from the physical network.
  • I stumbled across a trio of articles by Denton Gentry on hash tables (part 1, part 2, and part 3). This is an interesting perspective I hadn’t considered before; as we move more into software-defined networks (SDNs), why are we continuing to use the same mechanisms as before? Why not take advantage of more efficient mechanisms as part of this transition?
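For readers who want the baseline mechanism Denton's articles start from, here's a minimal separate-chaining hash table in Python. This is purely the textbook structure (real switching silicon uses far more specialized designs); the MAC-address key is just a stand-in example:

```python
# A minimal separate-chaining hash table: the generic mechanism that
# MAC tables and flow tables are conceptually built on.

class ChainedHashTable:
    def __init__(self, buckets=8):
        self.buckets = [[] for _ in range(buckets)]

    def _bucket(self, key):
        # Pick a bucket by hashing the key modulo the bucket count.
        return self.buckets[hash(key) % len(self.buckets)]

    def put(self, key, value):
        bucket = self._bucket(key)
        for i, (k, _) in enumerate(bucket):
            if k == key:
                bucket[i] = (key, value)  # overwrite an existing key
                return
        bucket.append((key, value))      # otherwise chain a new entry

    def get(self, key, default=None):
        for k, v in self._bucket(key):
            if k == key:
                return v
        return default

table = ChainedHashTable()
table.put("00:1b:21:aa:bb:cc", "port 3")
print(table.get("00:1b:21:aa:bb:cc"))  # port 3
```

The collision chains are exactly where the performance questions in those articles come from, which is why alternative structures become interesting for SDN.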

Servers/Operating Systems

  • Nigel Poulton and I traded a few tweets during HP Discover Vienna about SCSI Express (or SCSI over PCIe, SoP). He wrote up his thoughts about SoP and its future in the storage industry here. Further Twitter-based discussions about fabrics led him to say that HP buying Xsigo would bring the competition back against UCS. I’m not so sure I agree. Xsigo’s server fabric technology/product is interesting, but it seems to me that it’s still adding layers of abstraction that aren’t necessary. As SR-IOV, MR-IOV, and PCIe extension mature, it seems to me that Ethernet as the fabric is going to win. If that’s the case, and HP wants to bring the hurt against UCS, they’re going to have to invest in Ethernet-based fabrics.
  • Speaking of UCS, here’s a “how to” on deploying the UCS Platform Emulator on vSphere. You might also like the UCS PE configuration follow-up post.
  • Here’s what looks to be a handy Mac OS X utility to track how long until your Active Directory password expires. Sounds simple, yes, but useful.

Virtualization

  • Jason Boche, after some collaboration with Bob Plankers, wrote up a good procedure for expanding the vCloud Director Transfer Server storage space. It’s definitely worth a read if you’re going to be working with vCloud Director.
  • Microsoft has released version 3.2 of the Linux Integration Services for Hyper-V. The new release adds integrated mouse support, updated network drivers, and fixes an issue with SCVMM compatibility.
  • Julian Wood, who I had the opportunity to meet in Copenhagen at VMworld 2011, has published a four-part series on managing vSphere 5 certificates. Follow these links for the series: part 1, part 2, part 3, and part 4.
  • Thinking of deploying Oracle on vSphere? You should probably read this three-part series from VMware’s Business Critical Applications blog: part 1 is here, part 2 is here, and part 3 is here.
  • I’m so used to dealing with VLANs in a vSphere environment, I didn’t consider the challenges that might come up when using them with VMware Workstation. Fortunately, this author did—read his post on mapping VLANs to VMnets in VMware Workstation.
  • I thought that this article on virtual disks with business critical applications would be a deep dive on which virtual disk formats (thin, lazy zeroed, eager zeroed) are best suited for various applications. While the article does discuss the different virtual disk formats, unfortunately that’s as far as it goes.
  • Fellow VMware vSphere Design co-author Forbes Guthrie highlights an important design concern with AutoDeploy: what about a virtual vCenter instance? Read his full article for the in-depth discussion.
  • This post by William Lam gives a good overview of when vSphere MoRefs change (or don’t change).
  • Here’s a good explanation why NIC teaming can’t be used with iSCSI binding.
  • Cormac Hogan also posted a nice overview of some new vmkfstools enhancements in vSphere 5.
  • Terence Luk posts a detailed procedure to help recover VMware Site Recovery Manager in the event of a failure of one of the SRM servers. Good information—thanks Terence!

And that’s it for this time around. Feel free to add your thoughts in the comments below—all comments are welcome! (Please provide full disclosure of vendor affiliations/employment where applicable. Thanks!)


This is BRKCOM-3002, Network Redundancy and Load Balancing Designs for UCS Blade Servers, presented by none other than M. Sean McGee (available as @mseanmcgee on Twitter; I highly recommend you follow him on Twitter if you are interested in UCS). I was really looking forward to this presentation, as I’ve spoken with Sean before and I know that he is a super-sharp UCS guru. Sean also blogs here.

Sean starts out the presentation by asking, “Do you know how UCS networking thinks?” This sets the stage for how he is going to explain how frames move through UCS, both northbound and southbound. So part of his goal today is to teach us how UCS networking thinks.

A quick outline of the presentation is:

  • A review of components (fabric interconnects and extenders, converged network adapters)
  • A look at network path control mechanisms

Sean spends a few minutes reviewing the high-level architecture of the UCS and some of the differentiation between UCS and other typical blade architectures. He reminds the attendees that the fabric interconnects are more than just switches; they are the management for the entire system (the fabric interconnects are where UCS Manager runs). UCS Manager offers both high-level views of UCS as well as very detailed views of UCS. Sean also spends a bit of time reviewing the idea of service profiles as an encapsulation of the server’s identity and the central role that service profiles play in a UCS environment.

Over the next few minutes Sean discusses the trade-offs that come with blade servers (decreased server visibility and increased points of management) and how that plays into Cisco’s approach for UCS. The use of fabric extenders is key to eliminating some of the visibility loss and reducing points of management. This leads into an extended discussion of the fabric interconnects and fabric extenders and the relationship between them.

Next, a brief overview of the CNAs available for the UCS leads to a discussion of the VIC (Palo) and the use of Cisco’s FEX technology in virtualized environments. Here are the different FEX “types”:

  • Rack FEX: This is a typical N5K/N2K sort of deployment.
  • Chassis FEX: This is what UCS uses: UCS 61xx plus fabric extenders in the blade chassis.
  • Adapter FEX: This is VIC (Palo), creating virtual interfaces all the way into an operating system (any OS for which Cisco provides VIC/Palo drivers).
  • VM-FEX: This is the use of VIC (Palo) plus a management link between UCS and vCenter Server to connect VMs directly to the VIC, essentially “bypassing” the hypervisor. (As a side note: this has numerous design considerations that you have to consider with regards to vSphere.)

Now Sean opens up some new stuff…

A new fabric interconnect is in the works: the Cisco 6248UP fabric interconnect. This has more ports and supports Unified Ports. The Unified Port support is very cool; it allows any of the 48 ports to be either Ethernet or Fibre Channel.

A new fabric extender is also being released: the UCS 2208XP fabric extender (it goes in the chassis). The 2208XP fabric extender will offer 8 uplinks to the fabric interconnects (2x the uplinks of the 1st gen UCS fabric extender). The new fabric extender also offers more downlinks to individual servers (4x the downlinks versus previous generations). You can also use port-channeling between the fabric extender and the fabric interconnects.

Cisco is also releasing the VIC 1280, a new generation of the VIC (Palo) adapter. This offers dual 4x 10GbE uplinks for up to 80Gbps per host. The VIC 1280 can also present up to 116 interfaces to the OS (or hypervisor). The VIC 1280 can also use port-channeling up to the fabric extender, which was not supported in previous generations.

All of these components are fully interoperable with previous generation versions of the components.

Sean now switches gears to focus on the behaviors of UCS networking and “how UCS networking thinks.”

The first topic is the network path control mechanisms. There are several mechanisms in place, depending on “where” in the networking stack they are applicable. For example, at the “edge” of UCS there are mechanisms like Spanning Tree Protocol versus border port pinning (which of the two applies depends on whether the fabric interconnects run in Switch Mode or End Host Mode), and these operate only at the fabric interconnect level.

Sean spends a few minutes explaining the differences between a traditional switch and a vSwitch. For example, vSwitches don’t do MAC learning and don’t run STP; traditional switches, by contrast, receive broadcasts on all ports (not just one) and don’t use host pinning for load balancing.

Sean then uses that discussion to frame the difference between UCS fabric interconnects in Switch Mode (acting like a traditional switch) versus End Host Mode (acting like a vSwitch). End Host Mode makes the fabric interconnects “look” and “act” like a vSwitch. You can use active/active uplinks without port channeling, and the fabric interconnect uses pinning to attach servers to uplinks. Overall, it was a very good comparison.

Border port pinning is what UCS uses to load balance traffic from servers onto the fabric interconnect uplinks. Individual blades are either dynamically pinned to an uplink or statically (manually) pinned to an uplink. Traffic on the same VLAN and the same fabric interconnect is locally switched. All uplinks are active (because of End Host Mode) and there’s no concern for loops and no need for port channeling. You can use port channeling for uplinks, and those port channels can be used with either dynamic or static pinning.
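To illustrate the idea (this is not Cisco's actual pinning algorithm, just a sketch of the concept), dynamic pinning behaves roughly like spreading server vNICs across the available uplinks, with every uplink active and no port channel or STP required:

```python
# Conceptual sketch of dynamic border port pinning. The round-robin
# assignment here is my own simplification; the real fabric interconnect
# decides pinning internally.

def dynamic_pin(vnics, uplinks):
    """Spread vNICs across uplinks; returns a {vnic: uplink} mapping."""
    return {vnic: uplinks[i % len(uplinks)] for i, vnic in enumerate(vnics)}

pinning = dynamic_pin(
    ["blade1-eth0", "blade2-eth0", "blade3-eth0"],
    ["uplink1", "uplink2"],
)
# Frames between two blades on the same VLAN and the same fabric
# interconnect are switched locally; only frames bound for the upstream
# LAN leave via each blade's pinned uplink.
print(pinning)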

Some best practices for pinning:

  • Use dynamic pinning unless there is a specific reason to use static pinning.
  • Use port channels where possible for improved redundancy.

UCS also offers fabric port pinning (the way in which the fabric extender assigns uplinks). There are two modes: discrete mode and port channel mode. In discrete mode, you can use Fabric Failover (more info later) to assign a primary fabric path and a backup fabric path. This would allow you to modify the assignment of a server to a fabric extender uplink. You could also use NIC teaming in the OS (or hypervisor) to do the same thing and direct traffic to one port versus another port.

Port channel mode (only supported on the Gen2 components) changes all that behavior; the uplinks between the fabric extenders and the fabric interconnects can be used by any server in the chassis. This is a nice improvement, in my opinion. Port channel mode is selected in the Chassis Discovery Policy in UCS Manager.

Port channel mode versus discrete mode is selected on a per-chassis basis; you can have different chassis with different settings. This offers some flexibility for customers who might want more specific control over traffic placement.

The Gen2 CNA (the VIC 1280) and the Gen2 fabric extender (UCS 2208XP) can also use port channeling between them (this is new).

Sean then walked the attendees through various combinations of Gen1 and Gen2 hardware with both discrete mode and port channel mode. His recommended practice (assuming you have all Gen2 hardware) is to use port channel mode between both the VIC 1280 and the fabric extender and between the fabric extender and the fabric interconnect.

Next up is a discussion of fabric failover and how it behaves and acts. For customers concerned about having only a single CNA, you can deploy full-width blades with dual CNAs. Naturally, you configure fabric failover within a service profile. Only the Gen1 Menlo cards and Palo cards support fabric failover. Note that you can use fabric failover in conjunction with VM-FEX (pushing virtual interfaces all the way to VMs) to help control which VMs communicate over which fabric.

OS NIC teaming is also an option for directing/controlling traffic within a UCS environment. As with all other things, there are advantages and disadvantages of both approaches, so plan carefully.

Sean wraps up the session with a review of the decision process UCS follows to drive a frame northbound out of UCS, and then the same decision process when forwarding frames southbound.

All in all, this was an excellent session with lots of useful information. A fair amount of it was review for me, but the session was still well worth attending.


Now that I’ve published the Storage Edition of Technology Short Take #12, it’s time for the Networking Edition. Enjoy, and I hope you find something useful!

  • Ron Fuller’s ongoing deep dive series on OTV (Overlay Transport Virtualization) has been great for me. I knew about the basics of OTV, but Ron’s articles really gave me a better understanding of the technology. Check out the first three articles here: part 1, part 2, and part 3.
  • Similarly, Joe Onisick’s two-part (so far) series on inter-fabric traffic on Cisco UCS is very helpful and informative as well. There are definitely some design considerations that come about from deploying VMware vSphere on Cisco UCS. Have a look at Joe’s articles on his site (Part 1 and Part 2).
  • Kurt Bales’ article on innovation vs. standardization is a great read. The key, in my mind, is innovating (releasing “non-standard” stuff) while also working with the broader community to help encourage standardization around that innovation.
  • Here’s another great multi-part series, this time from Brian Feeny on NX-OS (part 1 here, and part 2 here). Brian exposes some pretty interesting stuff in the NX-OS kickstart and system image.
  • I’ve discussed LISP a little bit here and there, but Greg Ferro reminds us that LISP isn’t a “done deal.”
  • J Metz wrote a good article on the interaction (or lack thereof, depending on how you look at it) between FCoE and TRILL.
  • For a non-networking geek like me, some great resources to become more familiar with TRILL might include this comparison of 802.1aq and TRILL, this explanation from RFC 5556, this discussion of TRILL-STP integration, or this explanation using north-south/east-west terminology. Brad Hedlund’s TRILL write-up from a year ago is also helpful, in my opinion.
  • And as if understanding TRILL, or the differences between TRILL and FabricPath weren’t enough (see this discussion by Ron Fuller on the topic), then we have 802.1aq Shortest Path Bridging (SPB) thrown in for good measure, too. If it’s hard for networking experts to keep up with all these developments, think about the non-networking folks like me!
  • Ivan Pepelnjak’s examination of vCDNI-based private networks via Wireshark traces exposes some notable scalability limitations. It makes me wonder, as Ivan does, why VMware chose to use this method versus something more widely used and well-proven, like MPLS? And isn’t there an existing standard for MAC-in-MAC encapsulation? Why didn’t VMware use that existing standard? Perhaps it goes back to innovation vs. standardization again?
  • If you’re interested in more details on vCDNI networks, check out this post by Kamau Wanguhu.
  • Omar Sultan of Cisco has a quick post on OpenFlow and Cisco’s participation here.
  • Jake Howering of Cisco (nice guy, met him a few times) has a write-up on an interesting combination of technologies: ACE (load balancing) plus OTV (data center interconnect), with a small dash of VMware vCenter API integration.

I think that’s going to do it for this Networking Edition of Technology Short Take #12. I’d love to hear your thoughts, suggestions, or corrections about anything I’ve mentioned here, so feel free to join the discussion in the comments. Thanks for reading!


A little over a month ago, I was installing VMware ESXi on a Cisco UCS blade and noticed something odd during the installation. I posted a tweet about the incident. Here’s the text of the tweet in case the link above stops working:

Interesting…this #UCS blade has local disks but all disks are showing as remote during #ESXi install. Odd…

Several people responded, indicating they’d run into similar situations. No one—at least, not that I recall—was able to tell me why this was occurring, only that they’d seen it happen before. And it wasn’t just limited to Cisco UCS blades; a few people posted that they’d seen the behavior with other hardware, too.

This morning, I think I found the answer. While reading this post about scratch partition best practices on VMware ESXi Chronicles, I clicked through to a VMware KB article referenced in the post. The KB article discussed all the various ways to set the persistent scratch location for ESXi. (Good article, by the way. Here’s a link.)

What really caught my attention, though, was a little blurb at the bottom of the KB article in reference to examples where scratch space may not be automatically defined on persistent storage. Check this out (emphasis mine):

2.  ESXi deployed in a Boot from SAN configuration or to a SAS device. A Boot from SAN or SAS LUN is considered Remote, and could potentially be shared among multiple ESXi hosts. Remote devices are not used for scratch to avoid collisions between multiple ESXi hosts.

There’s the answer: although these drives are physically inside the server and are local to the server, they are considered remote during the VMware ESXi installation because they are SAS drives. Mystery solved!
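The KB's classification rule boils down to a simple predicate. Here's a small Python sketch with hypothetical device records; the field names are my own invention, but the logic follows the quoted text (Boot-from-SAN LUNs and SAS devices are treated as remote, so the installer skips them for persistent scratch):

```python
# Hypothetical device records illustrating the KB's rule; the real ESXi
# installer derives this from the storage transport, not from dicts.

def is_considered_local(device):
    """A device is 'local' only if it is neither a SAN LUN nor a SAS device."""
    return not (device.get("boot_from_san") or device.get("transport") == "SAS")

devices = [
    {"name": "naa.600...", "transport": "SAS", "boot_from_san": False},
    {"name": "mpx.vmhba1", "transport": "SATA", "boot_from_san": False},
]
for d in devices:
    print(d["name"], "local" if is_considered_local(d) else "remote")
```

This is why a SAS drive sitting physically inside the blade still shows up as "remote" in the installer.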


How’s that for acronyms?

In all seriousness, though, as I was installing VMware ESXi this evening onto some remote Cisco UCS blades, I ran into some interesting keymapping issues and I thought it might be handy to document what worked for me in the event others run into this issue as well.

So here’s the scenario: I’m running Mac OS X 10.6.7 on my MacBook Pro, and using VMware View 4.6 to connect to a remote Windows XP Professional desktop. Within that Windows XP Professional session, I’m running Cisco UCS Manager 1.4(1i) and loading up the KVM console to access the UCS blades. From there, I’m installing VMware ESXi onto the blades from a mapped ISO file.

What I found is that the following keystrokes worked correctly to pass through these various layers to the ESXi install process:

  • For the F2 key (necessary to log in to the ESXi DCUI), use Ctrl+F2 (in some places) or Cmd+F2 (in other places).
  • For the F5 key (to refresh various displays), the F5 key alone works.
  • For the F11 key (to confirm installation at various points during the ESXi install process), use Cmd+F11.
  • For the F12 key (used at the DCUI to shutdown/reboot), use Cmd+F12.

There are a couple of factors that might affect this behavior:

  • In the Keyboard section of System Preferences, I have “Use F1, F2, etc., keys as standard function keys” selected; this means that I have to use the Fn key to access any “special” features of the function keys (like increasing volume or adjusting screen brightness). I haven’t tested what impact this has on this key mapping behavior.
  • The Mac keyboard shortcuts in the preferences of the Microsoft Remote Desktop Connection do not appear to conflict with any of the keystrokes listed above, so it doesn’t appear that this is part of the issue.

If I find more information, or if I figure out why the keystrokes are mapping the way they are, I’ll post an update to this article. In the meantime, if you happen to need to install VMware ESXi onto a Cisco UCS blade via the UCSM KVM through VMware View from a Mac OS X endpoint, now you know how to make the keyboard shortcuts work.

Courteous comments are always welcome—speak up and contribute to the discussion!


That’s right folks, it’s time for another installation of Technology Short Takes. This is Technology Short Take #11, and I hope that you find the collection of items I have for you this time around both useful and informative. But enough of the introduction—let’s get to the good stuff!

Networking

  • David Davis (of Train Signal) has a good write-up on the Petri IT Knowledgebase on using a network packet analyzer with VMware vSphere. The key, of course, is enabling promiscuous mode. Read the article for full details.
  • Jason Nash instructs you on how to enable jumbo frames on the Nexus 1000V, in the event you’re interested. Jason also has good coverage of the latest release of the Nexus 1000V; worth reading in my opinion. Personally, I’d like Cisco to get to version numbers that are a bit simpler than 4.2(1) SV1(4).
  • Now here’s a link that is truly useful: Greg Ferro has put together a list of Cisco IOS CLI shortcuts. That’s good stuff!
  • There are a number of reasons why I have come to generally recommend against link aggregation in VMware vSphere environments, and Ivan Pepelnjak exposes another one that rears its head in multi-switch environments in this article. With the ability for vSphere to utilize all the uplinks without link aggregation, the need to use link aggregation isn’t nearly as paramount, and avoiding it also helps you avoid some other issues as well.
  • Ivan also tackles the layer 2 vs. layer 3 discussion, but that’s beyond my pay grade. If you’re a networking guru, then this discussion is probably more your style.
  • This VMware KB article, surprisingly enough, seems like a pretty good introduction to private VLANs and how they work. If you’re not familiar with PVLANs, you might want to give this a read.
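If you want to reason about PVLAN behavior quickly, the forwarding rules can be modeled in a few lines of Python. This reflects generic private VLAN semantics (promiscuous, isolated, community ports), not anything VMware-specific:

```python
# Generic PVLAN reachability rules: promiscuous ports talk to everyone;
# isolated ports talk only to promiscuous ports; community ports talk to
# promiscuous ports and to other ports in the same community.

def can_communicate(a, b):
    """Each port is a (mode, community) tuple; community is None unless
    mode == 'community'."""
    mode_a, comm_a = a
    mode_b, comm_b = b
    if mode_a == "promiscuous" or mode_b == "promiscuous":
        return True
    if mode_a == "community" and mode_b == "community":
        return comm_a == comm_b
    return False  # anything involving an isolated port (and no promiscuous)

assert can_communicate(("isolated", None), ("promiscuous", None))
assert not can_communicate(("isolated", None), ("isolated", None))
assert can_communicate(("community", "A"), ("community", "A"))
assert not can_communicate(("community", "A"), ("community", "B"))
```

A typical use is putting the default gateway on a promiscuous port while keeping tenant VMs on isolated or community ports.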

Servers/Hardware

  • Want to become more familiar with Cisco UCS, but don’t have an actual UCS to use? Don’t feel bad, I don’t either. But you can use the Cisco UCS Emulator, which is described in a bit more detail by Marcel here. Very handy!

Storage

  • Ever find yourself locked out of your CLARiiON because you don’t know or can’t remember the username and password? OK, maybe not (unless you inherited a system from your predecessor), but in those instances this post by Brian Norris will come in handy.
  • Fabio Rapposelli posted a good write-up on the internals of SSDs, in case you weren’t already aware of how they worked. As SSDs gain traction in many different areas of storage, knowing how SSDs work helps you understand where they are useful and where they aren’t.
  • Readers that are new to the storage space might find this post on SAN terminology helpful. It is a bit specific to Cisco’s Nexus platform, but the terms are useful to know nevertheless.
  • If you like EMC’s Celerra VSA, you’ll also like the new Uber VSA Guide. See this post over at Nick’s site for more details.
  • Fellow vSpecialist Tom Twyman posted a good write-up on installing PowerPath/VE. It’s worth reading if you’re considering PP/VE for your environment.
  • Joe Kelly of Varrow posted a quick write-up about VPLEX and RecoverPoint, in which he outlines one potential issue with interoperability between VPLEX and RecoverPoint: how will VPLEX data mobility affect RP? For now, you do need to be aware of this potential issue. For more information on VPLEX and RecoverPoint together, I’d also suggest having a look at my write-up on the subject.
  • I won’t get involved in the discussion around Open FCoE (the software FCoE stack announced a while back); plenty of others (J Metz speaks out here, Chad Sakac weighed in here, Ivan Pepelnjak offers his opinions here, and Wikibon here) have already thrown in. Instead, I’ll take the “Show me” approach. Intel has graciously offered me two X520 adapters, which I’ll run in my lab next to some Emulex CNAs. From there, we’ll see what the differences are under the same workloads. Look for more details from that testing in the next couple of months (sorry, I have a lot of other projects on my plate).
  • Jason Boche has been working with Unisphere, and apparently he likes the Unisphere-VMware integration (he’s not alone). Check out his write-up here.

Virtualization

  • For the most part, a lot of people don’t have to deal with SCSI reservation conflicts any longer. However, they can happen (especially in older VMware Infrastructure 3.x environments), and in this post Sander Daems provides some great information on detecting and resolving SCSI reservation conflicts. Good write-up, Sander!
  • If you like the information vscsiStats gives you but don’t like the format, check out Clint Kitson’s PowerShell scripts for vscsiStats.
  • And while we’re talking vscsiStats, I would be remiss if I didn’t mention Gabe’s post on converting vscsiStats data into Excel charts.
  • Rynardt Spies has decided he’s going Hyper-V instead of VMware vSphere. OK, only in his lab, and only to learn the product a bit better. While we all agree that VMware vSphere far outstrips Hyper-V today, Rynardt’s decision is both practical and prudent. Keep blogging about your experiences with Hyper-V, Rynardt—I suspect there will be more of us reading them than perhaps anyone will admit.
  • Brent Ozar (great guy, by the way) has an enlightening post about some of the patching considerations around Azure VMs. All I can say is ouch.
  • NIST has finally issued the final version of its full virtualization security guidelines; see the VMBlog write-up for more information.
  • vCloud Connector was announced by VMware last week at Partner Exchange 2011 in Orlando. More information is available here and here.
  • Arnim van Lieshout posted an interesting article on how to configure EsxCli using PowerCLI.
  • Sander Daems gets another mention in this installation of Technology Short Takes, this time for good information on an issue with ESXi and certain BIOS revisions of the HP SmartArray 410i array controller. The fix is an upgrade to the firmware.
  • Sean Clark did some “what if” thinking in this post about the union of NUMA and vMotion to create VMs that span multiple physical servers. Pretty interesting thought, and I have to wonder whether it’s really that far off. I mean, how many people saw vMotion coming before it arrived?
  • The discussion of a separate “management cluster” has been getting some attention recently. First was Scott Drummonds, with this post and this follow up. Duncan responded here. My take? I’ll agree with Duncan’s final comment that “an architect/consultant will need to work with all the requirements and constraints”. In other words, do what is best for your situation. What’s right for one customer might not be right for the next.
  • Be sure to check out Roman Tarnavski’s post on extending vShield.
  • Interested in knowing more about how Hyper-V does CPU scheduling? Ben Armstrong is happy to help out, with Part 1 and Part 2 of CPU scheduling with Hyper-V.
  • Here’s a good write-up on how to configure Pass-Through Switching (PTS) on UCS. This is something I still haven’t had the opportunity to do myself. It kind of helps to actually have a UCS for stuff like this.

It’s time to wrap up now; I think I’ve managed to throw out a few links and articles that someone should find useful. As always, feel free to speak up in the comments below.


Welcome to Technology Short Take #9, the last Technology Short Take for 2010. In this Short Take, I have a collection of links and articles about networking, servers, storage, and virtualization. Of note this time around: some great DCI links, multi-hop FCoE finally arrives (sort of), a few XenServer/XenDesktop/XenApp links, and NTFS defragmentation in the virtualized data center. Here you go—enjoy!


  • Brad Hedlund has a great post discussing Nexus 7000 connectivity options for Cisco UCS. I’ll include it in this section since it focuses more on the networking aspect rather than UCS. I haven’t had the time to read the full PDF linked in Brad’s article, but the other topics he discusses in the post—FabricPath networks, F1 vs. M1 linecards, and FCoE connectivity—are great discussions. I’m confident the PDF is equally informative and useful.
  • This UCS-specific post describes how northbound Ethernet frame flows work. Very useful information, especially if you are new to Cisco UCS.
  • Data Center Interconnect (DCI) is a hot topic these days considering that it is a key component of long-distance vMotion (aka vMotion at distance). Ron Fuller (who I had the pleasure of meeting in person a few weeks ago, great guy), aka @ccie5851 on Twitter and one of the authors of NX-OS and Cisco Nexus Switching: Next-Generation Data Center Architectures (available from Amazon), wrote a series on the various available DCI options such as EoMPLS, VPLS, A-VPLS, and OTV. If you’re considering DCI—especially if you’re a non-networking guy and need to understand the impact of DCI on the networking team—this series of articles is worth reading. Part 1 is here and part 2 is here.
  • And while we are discussing DCI, here’s a brief post by Ivan Pepelnjak about DCI encryption.
  • This post was a bit deep for me (I’m still getting up to speed on the more advanced networking topics), but it seemed interesting nevertheless. It’s a how-to on redistributing routes between VRFs.
  • Optical or twinax? That’s the question discussed by Erik Smith in this post.
  • Greg Ferro also discusses cabling in this post on cabling for 40 Gigabit and 100 Gigabit Ethernet.


  • As you probably already know, Cisco released version 1.4 of the UCS firmware. This version incorporates a number of significant new features: support for direct-connected storage, support for incorporating C-Series rack-mount servers into UCS Manager (via a Nexus 2000 series fabric extender connected to the UCS 61x0 fabric interconnects), and more. Jeremy Waldrop has a brief write-up that lists a few of his favorite new features.
  • This next post might only be of interest to partners and resellers, but having been in that space before joining EMC I fully understand the usefulness of having a list of references and case studies. In this case, it’s a list of case studies and references for Cisco UCS, courtesy of M. Sean McGee (who I hope to meet in person in St. Louis in just a couple of weeks).



  • Using XenServer and need to support multicast? Look to this article for the information on how to enable multicast with XenServer.
  • A couple of colleagues over at Intel (I worked with Brian on one of his earlier white papers) forwarded me the link to their latest Ethernet virtualization white paper, which discusses the use of 10 Gigabit Ethernet with VMware vSphere. You can find the link to the latest paper in this blog entry.
  • Bhumik Patel has a good write-up on the “behind-the-scenes” technical details that went into the Cisco-Citrix design guides around XenDesktop/XenApp on Cisco UCS. Bhumik provides the details on things like how many blades were used in the testing, what the configuration of the blades was, and what sort of testing was performed.
  • Thinking of carving your storage up into guest OS datastores for VMware? You might want to read this first for some additional considerations.
  • I know that this has seen some traffic already, but I did want to point out Eric Sloof’s post on the Xenoss XenPack for ESXTOP. I haven’t had the opportunity to use it yet, but would certainly love to hear from anyone who has. Feel free to share your experiences in the comments.
  • As is usually the case, Duncan Epping has had some great posts over the last few weeks. His post on shares set on resource pools highlights the need to adjust the shares value (and other resource constraints) based on the contents of the pool, something that many people forget to do. He also provides a breakdown of the various vCenter memory statistics, and discusses an issue with binding a Provider vDC directly to an ESX/ESXi host.
  • PowerCLI 4.1.1 has some improvements for VMware HA clusters which are detailed in this VMware vSphere PowerCLI Blog entry.
  • Frank Denneman has three articles which have caught my attention over the last few weeks. (All his stuff is good, by the way.) First is his two-part series on the impact of oversized virtual machines (part 1 and part 2). Some of the impacts Frank discusses include memory overhead, NUMA architectures, shares values, HA slot size, and DRS initial placement. Apparently a part 3 is planned but hasn’t been published yet (see some of the comments in part 2). Also worth a read is Frank’s recent post on node interleaving.
  • Here’s yet another tool in your toolkit to help with the transition to ESXi: a post by Gabe on setting logfile location, swap file, SNMP, and vmkcore partition in ESXi.
  • Here’s another guide to creating a bootable ESXi USB stick (on Windows). Here’s my guide to doing it on Mac OS X.
  • Jon Owings had an idea about dynamic cluster pooling. This is a pretty cool idea—perhaps we can get VMware to include it in the next major release of vSphere?
  • Irritated that VMware disabled copy-and-paste between the VM and the vSphere Client in vSphere 4.1? Fix it with these instructions.
  • This white paper on configuration examples and troubleshooting for VMDirectPath was recently released by VMware. I haven’t had the chance to read it yet, but it’s on my “to read” list. I’ll just have a look at that in my copious free time…
  • David Marshall has posted on VMblog.com a two-part series on how NTFS causes I/O bottlenecks on virtual machines (part 1 and part 2). It’s a great review of NTFS and how Microsoft’s file system works. Ultimately, the author of the posts (Robert Nolan) sets the readers up for the need for NTFS defragmentation in order to reduce the I/O load on virtualized infrastructures. While I do agree with Mr. Nolan’s findings in that regard, there are other considerations that you’ll also want to include. What impact will defragmentation have on your storage array? For example, I think that NetApp doesn’t recommend using defragmentation in conjunction with their storage arrays (I could be wrong; can anyone confirm?). So, I guess my advice would be to do your homework, see how defragmentation is going to affect the rest of your environment, and then proceed from there.
  • Microsoft thinks that App-V should be the most important tool in your virtualization tool belt. Do you agree or disagree?
  • William Lam has instructions for how to identify the origin of a vSphere login. This might not be something you need to do on a regular basis, but when you do, you’ll be thankful to have the instructions handy.
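On the defragmentation discussion above, the underlying arithmetic is simple: a file scattered across non-contiguous extents costs more I/O requests (and, on spinning disk, more seeks) to read than a contiguous one, and that extra load is exactly what defragmentation removes. Here’s a minimal sketch of the effect, using hypothetical extent lists rather than real NTFS on-disk structures:

```python
# Toy model: count the read requests needed for a file stored as a list
# of (start_block, length) extents. Every break in contiguity forces a
# separate request (and, on a spinning disk, a seek).
def io_requests(extents):
    requests = 1
    for (prev_start, prev_len), (start, _length) in zip(extents, extents[1:]):
        if start != prev_start + prev_len:  # gap: a new request is needed
            requests += 1
    return requests

contiguous = [(100, 64), (164, 64), (228, 64)]   # one sequential run
fragmented = [(100, 64), (900, 64), (350, 64)]   # three scattered runs

print(io_requests(contiguous))  # 1
print(io_requests(fragmented))  # 3
```

The real NTFS picture is more involved, of course, but this request-count multiplication is the heart of the argument for defragmenting, and it is also why you should check how all those extra (or fewer) I/Os interact with your array’s own caching and layout before turning a defragmenter loose.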

I guess it’s time to wrap up now, since I have likely overwhelmed you with a panoply of data center-related tidbits. As always, I encourage your feedback, so please feel free to speak up in the comments. Thanks for reading!


Welcome to Technology Short Take #8, my latest collection of items from around the Internet pertaining to data center technologies such as virtualization, networking, storage, and servers. I hope you find something useful here!


  • A couple weeks ago I wrote about how to connect a Nexus 2148T to a Nexus 5010. What I haven’t actually done yet is connect the Nexus 2148T to dual Nexus 5010 upstream switches using virtual port-channels. This post offers a configuration for just such an arrangement (just be sure to read the follow-up post for a key command to include in the configuration).
  • And speaking of fabric extenders, Brad Hedlund recently posted a great article explaining QoS on the Cisco UCS fabric extender.
  • Erik Smith has posted part 2 of his series on FIP, FIP snooping bridges, and FCFs.
  • This post by Ethan Banks on the scaling limitations of EtherChannel reinforces what I’ve said before about the use of EtherChannel and VMware ESX/ESXi (see this post and this follow-up post). Apparently this misunderstanding about how link aggregation works isn’t just limited to virtualization folks.
  • This article on BPDU Guard and BPDU Filter is a bit deep for the non-expert networkers (like me), but is good reading nevertheless. Since it is recommended that you disable Spanning Tree Protocol for ports facing VMware ESX/ESXi hosts (and here’s the explanation why from our good friend Ivan “IOSHints” Pepelnjak), it’s good to understand which commands to use and at which scope to use them.
  • While link aggregation isn’t a panacea, it can be helpful in a number of situations. However, you really need to consider multi-chassis link aggregation in order to eliminate single points of failure. Ivan Pepelnjak posted a series of articles on multi-chassis link aggregation (MLAG): MLAG basics, MLAG via stacking, and MLAG external brains. I think there’s another post still coming, correct Ivan?
  • This post is written by networking guru Jeremy Gaddis, but it’s really more about UNIX and regular expressions. So in which category does it belong? (Does it really matter?) I guess since it deals with how to use UNIX tools and regular expressions to manipulate networking data I’ll put it in the networking category.
  • Here’s a good post on creating LAN-to-LAN tunnels between a Vyatta router and a Cisco ASA.
  • Joe Onisick has a good write-up on the behavior of inter-fabric traffic in UCS.
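The EtherChannel item above is worth restating: link aggregation balances flows, not packets. A hash over flow attributes pins each flow to one member link, so a single flow can never use more than one link’s bandwidth no matter how many links are in the bundle. A toy sketch of the idea (a deliberately simplified hash, not the actual Cisco or ESX algorithm):

```python
# Simplified flow-based hashing: every packet of a given flow hashes to
# the same member link, so one flow can never exceed one link's bandwidth.
def member_link(src_ip, dst_ip, num_links):
    # XOR of the last octets stands in for the real hash inputs
    src_last = int(src_ip.rsplit(".", 1)[1])
    dst_last = int(dst_ip.rsplit(".", 1)[1])
    return (src_last ^ dst_last) % num_links

# A single flow is pinned to one link, regardless of its traffic volume...
print(member_link("10.0.0.10", "10.0.0.20", 4))  # 2

# ...while many distinct flows spread across the whole bundle.
links = {member_link("10.0.0.10", f"10.0.0.{d}", 4) for d in range(1, 50)}
print(sorted(links))  # [0, 1, 2, 3]
```

This is why adding links to a bundle raises aggregate throughput across many flows but never raises the ceiling for any one flow, which is the misunderstanding Ethan’s post calls out.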



  • It’s good to see more “virtualization” people getting into storage. Recently, Jon Owings (of “2vcps and a Truck” fame) posted two good articles on storage caching vs. tiering (part 1 and part 2). Jon provides a good vehicle to discuss these technologies, both of which are valuable and both of which have a place in many companies’ storage strategy.
  • Here’s another view on storage tiering vs. caching.
  • And speaking yet again of caching and tiering…here’s some info on FAST (tiering) and FAST Cache (caching).
  • Finally, rounding out this week’s caching and tiering discussions, I have Nigel Poulton’s post on sub-LUN tiering design considerations.


  • Noted virtualization performance guru Scott Drummonds posted a collection of thoughts regarding maximum hosts per cluster. Duncan Epping—who, along with Frank Denneman, recently published the HA and DRS Technical Deep Dive book—followed up with his own post on the matter. Both Scott and Duncan summarized some of the advantages and disadvantages of both large and small clusters. The real key to understanding the “right” cluster size is, as Duncan points out, an understanding of how best to meet the requirements of the environment.
  • Scott also posted this useful discussion on considerations around VMDK consolidation in virtualized environments.
  • Need to find the IP address of a VM from an ESX host? Try this article, which provides the steps and tools you need.
  • I saw a couple of good articles from Frank Denneman recently. One was on how to disallow multiple VM console sessions; the second discussed disabling the balloon driver and why it’s generally not a good idea. Frank also posted a piece recently on the interaction between QoS and the adaptive algorithm that DRS uses to determine concurrent vMotions. While I agree that you don’t want to impact DRS’ ability to perform concurrent vMotions to restore cluster balance, it’s also critically important to ensure proper balance in network traffic. You don’t want a single type of network traffic to swamp others. As with many other things in the virtualization arena, just be sure to understand the impact of your decisions.
  • If you need to shape network traffic (like vMotion), here’s a how-to document describing the process.
  • This article underscores the difficulty that many organizations have in adopting “the cloud”, at least as it pertains to public cloud-based services. Perhaps this is why VMware is so keen on embracing software development tools that help build cloud-enabled applications…
  • I think that esxtop remains an underestimated tool in troubleshooting. If you need to deepen your understanding of this very useful tool, a good place to start—among the many, many resources that are available—would be this presentation. Thanks for posting this, Alan!
  • William Lam has posted a thorough guide to using vi-fastpass with fpauth and adauth. As the vMA grows in importance with the transition to ESXi in the next major release of vSphere, tips and tricks such as this are going to be worth their weight in gold. Good job, William!
  • Remember how I’ve mentioned before that it’s funny when vendors take existing technologies and rename them to include the word “virtualization”? That’s the feeling I get from reading this article on Microsoft’s user state virtualization.
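To put a little arithmetic behind Frank’s balloon driver post above: ballooning lets the guest OS choose which (ideally idle) pages to surrender, while hypervisor-level swapping picks pages blindly. Disable the balloon driver and any memory pressure has to be satisfied entirely by host swapping. A toy model with made-up numbers, not ESX internals:

```python
# Toy model of hypervisor memory reclaim. With ballooning, the guest
# frees its own idle pages first; without it, the hypervisor must swap
# pages to disk without knowing which ones the guest actually needs.
def reclaim(needed_mb, guest_idle_mb, balloon_enabled):
    """Return (ballooned_mb, host_swapped_mb) for a reclaim request."""
    ballooned = min(needed_mb, guest_idle_mb) if balloon_enabled else 0
    swapped = needed_mb - ballooned  # remainder comes from host-level swap
    return ballooned, swapped

print(reclaim(512, 768, balloon_enabled=True))   # (512, 0): no host swap
print(reclaim(512, 768, balloon_enabled=False))  # (0, 512): all from swap
print(reclaim(512, 128, balloon_enabled=True))   # (128, 384): mixed
```

Real ESX/ESXi behavior is far more nuanced than this, but the sketch shows the trade-off: removing the balloon driver swaps cooperative reclaim for blind host-level swapping, which is exactly why Frank argues against it.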

It’s time to wrap up now, so I’ll just say that I welcome any feedback, corrections, clarifications, or additions in the comments below. I hope you’ve found something helpful. Thanks!


Welcome to Technology Short Take #2, a collection of links, thoughts, ideas, and items pertaining to data center technologies—virtualization, networking, storage, and security. I hope you find something useful or interesting!

  • The release of FLARE 30 and DART 6 by EMC (formally announced last week) introduces some new concepts and new functionality. Matt Hensley recently did a write-up on some of the new functionality in this post on virtual provisioning, storage pools, and FLARE 30. It’s worth a read if you aren’t already familiar with these technologies and need a primer.
  • If you are looking for the definitive guide on connectivity between various VMware vSphere components and the TCP/UDP ports required, you need only look here. Great information!
  • Here’s a great guide from Cisco on deployment options when deploying 10 Gigabit Ethernet on VMware vSphere 4.0 with the Nexus 1000V or the VMware vNetwork Distributed Switch. I’ve read through it, but I’ve added it to my list of documents to go back and study more carefully; there’s lots of useful information in here.
  • Way back in March Dave Convery posted this article on limitations with VMware vShield Zones. While re-reading that article today, I noted in the comments that the Nexus 1000V has a feature called Virtual Service Domains that helps address some of the limitations of vShield Zones (at that time). As pointed out in the comments, this makes vShield Zones usable in two-NIC scenarios such as with Cisco UCS. If anyone has any additional links on Virtual Service Domains, please share them in the comments. This is a topic that I think needs some additional attention.
  • This article is a good breakdown of the differences in storage identifiers between ESX 3.x and ESX 4.1.
  • Jeff Woolsey at Microsoft finally wraps up his series of articles on Hyper-V Dynamic Memory with Part 6. I’ve been reading this series pretty faithfully as Jeff systematically lays out the various ways in which memory is handled in a virtualization scenario, and I’ve been consistently struck by the impression that Jeff was working really hard to distinguish what Microsoft was doing with Hyper-V from what VMware does with ESX/ESXi. In the end, though, I can’t help but see all the similarities between the two. Dynamic Memory allocates additional memory to a VM as it needs it (much the same way ESX/ESXi does by allocating memory only as requested by the VM) and reclaims free pages from the VMs (just like ESX/ESXi reclaims idle pages via idle page reclamation). When under memory pressure, Hyper-V might force the guests to page out to disk; ESX/ESXi’s memory balloon driver achieves the same effect. What’s missing, obviously, is that with Hyper-V the hypervisor itself won’t swap pages out to disk (ESX/ESXi will do this under extreme circumstances). Am I missing something, or is Microsoft’s Dynamic Memory a lot more like VMware’s memory management technologies than Microsoft wants to admit? Feel free to enlighten me (courteously and with full disclosure) in the comments if I’m missing something.
  • Via Geert Verbist’s site, I found this article on application consistent quiescing via VMware’s VSS integration in VMware Tools. (For more information on VSS support within VMware Tools, check out my liveblog from Partner Exchange earlier this year.) This is good to hear, but what’s still not clear is whether the application consistent snapshots will truncate transaction logs. If anyone has more information, speak up in the comments.
  • I think I pointed this out a week or two ago on Twitter, but I thought I’d mention it here as well. If you ever need help decoding which WWPNs map to which ports on an EMC CLARiiON array, this article is quite helpful. Anyone have matching articles for EMC Symmetrix, NetApp, HP, HDS, or other arrays?
  • With the formal announcement by VMware that vSphere 4.1 will be the last major release that includes ESX, ESXi is naturally getting much more attention. With that, there’s been a flurry of ESXi-related articles:
    Using vMA As Your ESXi Syslog Server
    The Migration From ESX to ESXi is Happening: Moving Configurations, Part 1
    The Migration from ESX to ESXi is Happening: Moving Configurations, Part II
    My VMware ESXi Installation Checklist
    Virtually Ghetto: ESXi 4.1 – Major Security Issue (also documented here in the VMware KB)
    ESXi 4.1 – Major Security Issue – The Sequel and the Workaround
    ESXi 4.1 Active Directory Integration
  • If you’re into Cisco UCS but like Hyper-V instead of VMware vSphere, Cisco has a white paper on Cisco UCS with Hyper-V for delivery of virtualized Exchange 2010.
  • I’m a command-line junkie, so I liked this article on how to put an ESX host into maintenance mode from the CLI.
  • For those seeking to get up to speed on the Nexus 7000 switches, “Fryguy” posted some training documents on his site. I haven’t read them (yet), but they’re on my list of documents to read (a list that grows ever longer…)

I guess that will do it for this time around. I hope that you’ve found something useful and, as always, feel free to add more useful links or tidbits in the comments. Thanks for reading!

