UCS

Welcome to Technology Short Take #42, another installation in my ongoing series of irregularly published collections of news, items, thoughts, rants, raves, and tidbits from around the Internet, with a focus on data center-related technologies. Here’s hoping you find something useful!

Networking

  • Anthony Burke’s series on VMware NSX continues with part 5.
  • Aaron Rosen, a Neutron contributor, recently published a post about a Neutron extension called Allowed-Address-Pairs and how you can use it to create highly available instances using VRRP (via keepalived). Very cool stuff, in my opinion. (A quick sketch of the port update appears after this list.)
  • Bob McCouch has a post over at Network Computing (where I’ve recently started blogging as well—see my first post) discussing his view on how software-defined networking (SDN) will trickle down to small and mid-sized businesses. He makes comparisons among server virtualization, 10 Gigabit Ethernet, and SDN, and feels that in order for SDN to really hit this market it needs to be “not a user-facing feature, but rather a means to an end” (his words). I tend to agree—focusing on SDN is focusing on the mechanism, rather than focusing on the problems the mechanism can address.
  • Want or need to use multiple external networks in your OpenStack deployment? Lars Kellogg-Stedman shows you how in this post on multiple external networks with a single L3 agent.
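
To make the Allowed-Address-Pairs item above a bit more concrete, here's a minimal sketch of what the port update looks like using python-neutronclient. The credentials, port IDs, and VIP are placeholders, and keepalived inside the instances still has to be configured separately; treat this as an illustration of the extension rather than a complete HA recipe.

```python
# Minimal sketch of the Allowed-Address-Pairs update for a VRRP/keepalived
# setup, using python-neutronclient. Credentials, port IDs, and the VIP are
# placeholders; keepalived inside the instances is configured separately.
from neutronclient.v2_0 import client

neutron = client.Client(
    username="demo",
    password="secret",
    tenant_name="demo",
    auth_url="http://controller:5000/v2.0",
)

VIP = "10.0.0.100"  # the address keepalived floats between the instances

for port_id in ("PORT-ID-OF-INSTANCE-A", "PORT-ID-OF-INSTANCE-B"):
    # Without this, Neutron's anti-spoofing rules drop traffic sourced from
    # the VIP, since it isn't one of the port's fixed IPs.
    neutron.update_port(
        port_id,
        {"port": {"allowed_address_pairs": [{"ip_address": VIP}]}},
    )
```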

Servers/Hardware

  • There was some noise this past week about Cisco UCS moving into the top x86 blade server spot for North America in Q1 2014. Kevin Houston takes a moment to explore some ideas about why Cisco was so successful in this post. I agree that Cisco had some innovative ideas in UCS—integrated management and server profiles come to mind—but my biggest beef with UCS right now is that it is still primarily a north/south (server-to-client) architecture in a world where east/west (server-to-server) traffic is becoming increasingly critical. Can UCS hold on in the face of a fundamental shift like that? I don’t know.

Security

  • Need to scramble some data on a block device? Check out this command. (I love the commandlinefu.com site. It reminds me that I still have so much yet to learn.)
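
I won't reproduce the one-liner here (go check the link), but for the curious, a rough Python equivalent of the idea, overwriting a block device with random data, might look like the sketch below. The device path is a placeholder and the operation is destructive, so treat it as illustrative only.

```python
# Rough Python take on scrambling a block device with random data (the linked
# one-liner does this from the shell). DESTRUCTIVE and illustrative only:
# /dev/sdX is a placeholder; everything on the device is overwritten.
import os

DEVICE = "/dev/sdX"
CHUNK = 4 * 1024 * 1024  # write 4 MiB of random data at a time

with open(DEVICE, "r+b", buffering=0) as dev:
    size = dev.seek(0, os.SEEK_END)  # seeking to the end reports device size
    dev.seek(0)
    written = 0
    while written < size:
        written += dev.write(os.urandom(min(CHUNK, size - written)))
```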

Cloud Computing/Cloud Management

  • Want to play around with OpenDaylight and OpenStack? Brent Salisbury has a write-up on how to run OpenStack Icehouse (deployed via DevStack) together with OpenDaylight.
  • Puppet Labs has released a module that allows users to programmatically (via Puppet) provision and configure Google Compute Engine (GCE) instances. More details are available in the Puppet Labs blog post.
  • I love how developers come up with these themes around certain projects. Case in point: “Heat” is the name of the project for orchestrating resources in OpenStack, HOT is the name for the format of Heat templates, and Flame is the name of a new project to automatically generate Heat templates.
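
For anyone who hasn't seen a Heat template before, here's a tiny sketch of what HOT looks like and how you might feed it to Heat from Python. The endpoint, token, image, flavor, and network names are all placeholders, and the python-heatclient call should be verified against the client docs before you rely on it.

```python
# A tiny HOT template (as a YAML string) and a sketch of creating a stack
# from it with python-heatclient. Endpoint, token, image, flavor, and network
# names are placeholders; verify the client API against the heatclient docs.
from heatclient.client import Client

hot_template = """
heat_template_version: 2013-05-23
description: Minimal example stack
resources:
  demo_server:
    type: OS::Nova::Server
    properties:
      image: cirros
      flavor: m1.tiny
      networks:
        - network: private
"""

heat = Client("1", endpoint="http://controller:8004/v1/TENANT_ID",
              token="AUTH_TOKEN")
heat.stacks.create(stack_name="demo", template=hot_template)
```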

Operating Systems/Applications

  • I can’t imagine that anyone has been immune to the onslaught of information on Docker, but here’s an article that might be helpful if you’re still looking for a quick and practical introduction.
  • Many of you are probably familiar with Razor, the project that former co-workers Nick Weaver and Tom McSweeney created when they were at EMC. Tom has since moved on to CSC (via the vCHS team at VMware) and has launched a “next-generation” version of Razor called Hanlon. Read more about Hanlon and why this is a new/separate project in Tom’s blog post here.
  • Looking for a bit of clarity around CoreOS and Project Atomic? I found this post by Major Hayden to be extremely helpful and informative. Both of these projects are on my radar, though I’ll probably focus on CoreOS first as the (currently) more mature solution.
  • Linux Journal has a nice multi-page write-up on Docker containers that might be useful if you are still looking to understand Docker’s basic building blocks.
  • I really enjoyed Donnie Berkholz’ piece on microservices and the migrating Unix philosophy. It was a great view into how composability can (and does) shift over time. Good stuff, I highly recommend reading it.
  • cURL is an incredibly useful utility, especially in today’s age of HTTP-based REST APIs. Here’s a list of 9 uses for cURL that are worth knowing. This article on testing REST APIs with cURL is handy, too. (For comparison, there’s a quick Python equivalent after this list.)
  • And for something entirely different…I know that folks love to beat up AppleScript, but it’s cross-application tasks like this that make it useful.
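
A quick aside on the REST-testing item above: the same sort of poking around can be done from Python with the requests library. The URL, token, and payload below are made up, so adjust for whatever API you're actually testing.

```python
# The same kind of REST calls the cURL articles cover, done from Python with
# the requests library. The endpoint, token, and payload are made-up examples.
import requests

BASE = "https://api.example.com/v1"
HEADERS = {"X-Auth-Token": "REPLACE_ME", "Content-Type": "application/json"}

# Roughly: curl -H "X-Auth-Token: ..." https://api.example.com/v1/servers
resp = requests.get(f"{BASE}/servers", headers=HEADERS)
resp.raise_for_status()
print(resp.json())

# Roughly: curl -X POST -H "..." -d '{"name": "test"}' .../servers
resp = requests.post(f"{BASE}/servers", headers=HEADERS, json={"name": "test"})
print(resp.status_code, resp.json())
```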

Storage

  • Someone recently brought the open source Open vStorage project to my attention. Open vStorage compares itself to VMware VSAN, but it supports multiple storage backends and multiple hypervisors. Like a lot of other solutions, it’s implemented as a VM that presents NFS back to the hypervisors. If anyone out there has used it, I’d love to hear your feedback.
  • Erik Smith at EMC has published a series of articles on “virtual storage networks.” There’s some interesting content there—I haven’t finished reading all of the posts yet, as I want to be sure to take the time to digest them properly. If you’re interested, I suggest starting out with his introductory post (which, strangely enough, wasn’t the first post in the series), then moving on to part 1, part 2, and part 3.

Virtualization

  • Did you happen to see this write-up on migrating a VMware Fusion VM to VMware’s vCloud Hybrid Service? For now—I believe there are game-changing technologies out there that will alter this landscape—one of the very tangible benefits of vCHS is its strong interoperability with your existing vSphere (and Fusion!) workloads.
  • Need a listing of the IP addresses in use by the VMs on a given Hyper-V host? Ben Armstrong shares a bit of PowerShell code that produces just such a listing. As Ben points out, this can be pretty handy when you’re trying to track down a particular VM.
  • vCenter Log Insight 2.0 was recently announced; Vladan Seget has a decent write-up. I’m thinking of putting this into my home lab soon for gathering event information from VMware NSX, OpenStack, and the underlying hypervisors. I just need more than 24 hours in a day…
  • William Lam has an article on lldpnetmap, a little-known utility for mapping ESXi interfaces to physical switches. As the name implies, this relies on LLDP, so switches that don’t support LLDP or that don’t have LLDP enabled won’t work correctly. Still, a useful utility to have in your toolbox.
  • Technology previews of the next versions of Fusion (Fusion 7) and Workstation (Workstation 11) are available; see Eric Sloof’s articles (here and here for Fusion and Workstation, respectively) for more details.
  • vSphere 4 (and associated pieces) are no longer under general support. Sad face, but time stops for no man (or product).
  • Having some problems with VMware Fusion’s networking? Cody Bunch channels his inner Chuck Norris to kick VMware Fusion networking in the teeth.
  • Want to preview OS X Yosemite? Check out William Lam’s guide to using Fusion or vSphere to preview the new OS X beta release.

I’d better wrap this up now, or it’s going to turn into one of Chad’s posts. (Just kidding, Chad!) Thanks for taking the time to read this far!


Reader Brian Markussen—with whom I had the pleasure to speak at the Danish VMUG in Copenhagen earlier this month—brought to my attention an issue between VMware vSphere’s health check feature and Cisco UCS when using Cisco’s VIC cards. His findings, confirmed by VMware support and documented in this KB article, show that the health check feature doesn’t work properly with Cisco UCS and the VIC cards.

Here’s a quote from the KB article:

The distributed switch network health check, including the VLAN, MTU, and teaming policy check can not function properly when there are hardware virtual NICs on the server platform. Examples of this include but are not limited to Broadcom Flex10 systems and Cisco UCS systems.

(Ignore the fact that “UCS systems” is redundant.)

According to Brian, a fix for this issue will be available in a future update to vSphere. In the meantime, there doesn’t appear to be any workaround, so plan accordingly.


Welcome to Technology Short Take #29! This is another installation in my irregularly published series of links, thoughts, rants, and raves across various data center-related fields of technology. As always, I hope you find something useful here.

Networking

  • Who out there has played around with Mininet yet? Looks like this is another tool I need to add to my toolbox as I continue to explore networking technologies like OpenFlow, Open vSwitch, and others.
  • William Lam has a recent post on some useful VXLAN commands found in ESXCLI with vSphere 5.1. I’m a CLI fan, so I like this sort of stuff.
  • I still have a lot to learn about OpenFlow and networking, but this article from June of last year (it appears to have been written by Ivan Pepelnjak) discusses some of the potential scalability concerns around early versions of the OpenFlow protocol. In particular, the use of OpenFlow to perform granular per-flow control when there are thousands (or maybe only hundreds) of flows presents a scalability challenge (for now, at least). In my mind, this isn’t an indictment of OpenFlow, but rather an indictment of the way that OpenFlow is being used. I think that’s the point Ivan tried to make as well—it’s the architecture and how OpenFlow is used that makes a difference. (Is that a reasonable summary, Ivan?)
  • Brad Hedlund (who will be my co-worker starting on 2/11) created a great explanation of network virtualization that clearly breaks down the components and explains their purpose and function. Great job, Brad.
  • One of the things I like about Open vSwitch (OVS) is that it is so incredibly versatile. Case in point: here’s a post on using OVS to connect LXC containers running on different hosts via GRE tunnels. Handy!
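
If you're curious what that OVS-plus-GRE setup boils down to, the sketch below wires up the Open vSwitch side from Python by shelling out to ovs-vsctl: one bridge per host, a GRE port pointing at the peer host, and the container's host-side veth attached to the bridge. The bridge name, interface names, and peer IP are my own placeholders, not anything from the linked post.

```python
# Rough sketch of the Open vSwitch side of the LXC-over-GRE setup: one bridge
# per host, a GRE port pointing at the peer host, and the container's
# host-side veth attached to the bridge. Names and the peer IP are
# placeholders; run the equivalent on each host, pointing remote_ip at the
# other host.
import subprocess

BRIDGE = "br-lxc"
PEER_IP = "192.168.1.2"      # tunnel endpoint on the other host
CONTAINER_IF = "veth-web01"  # host-side veth interface of the LXC container

def ovs(*args):
    subprocess.run(["ovs-vsctl", *args], check=True)

ovs("--may-exist", "add-br", BRIDGE)
ovs("--may-exist", "add-port", BRIDGE, "gre0",
    "--", "set", "interface", "gre0", "type=gre",
    f"options:remote_ip={PEER_IP}")
ovs("--may-exist", "add-port", BRIDGE, CONTAINER_IF)
```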

Servers/Hardware

  • Cisco UCS is pretty cool in that it makes automation of compute hardware easier through such abstractions as server profiles. Now, you can also automate UCS with Chef. I traded a few tweets with some Puppet folks, and they indicated they’re looking at this as well.
  • Speaking of Puppet and hardware, I also saw a mention on Twitter about a Puppet module that will manage the configuration of a NetApp filer. Does anyone have a URL with more information on that?
  • Continuing the thread on configuration management systems running on non-compute hardware (I suppose this shouldn’t be under the “Servers/Hardware” section any longer!), I also found references to running CFEngine on network appliances and running Chef on Arista switches. That’s kind of cool. What kind of coolness would result from even greater integration between an SDN controller and a declarative configuration management tool? Hmmm…

Security

  • Want full-disk encryption in Ubuntu, using AES-XTS-PLAIN64? Here’s a detailed write-up on how to do it. (The essential cryptsetup steps are sketched after this list.)
  • In posts and talks I’ve given about personal productivity, I’ve spoken about the need to minimize “friction,” that unspoken drag that makes certain tasks or workflows more difficult and harder to adopt. Tal Klein has a great post on how friction comes into play with security as well.
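
For reference, the heart of that encryption setup comes down to a couple of cryptsetup commands, shown below wrapped in Python for consistency with the other examples here. The device path and mapper name are placeholders, the commands prompt interactively for a passphrase, and a real whole-disk install would do this from the installer; this is a sketch, not a substitute for the linked write-up.

```python
# The core cryptsetup steps behind the linked full-disk-encryption write-up,
# wrapped in Python for consistency with the other examples. DESTRUCTIVE and
# illustrative only: /dev/sdX and the mapper name are placeholders, and both
# cryptsetup commands prompt interactively for a passphrase.
import subprocess

DEVICE = "/dev/sdX"   # placeholder; everything on this device is lost
MAPPER = "cryptdata"  # shows up as /dev/mapper/cryptdata once opened

def run(*args):
    subprocess.run(args, check=True)

# Format the device as a LUKS container using AES in XTS mode
# (a 512-bit key, i.e. two 256-bit halves, as XTS requires).
run("cryptsetup", "luksFormat",
    "--cipher", "aes-xts-plain64", "--key-size", "512", DEVICE)

# Open the container, then put a filesystem on the mapped device.
run("cryptsetup", "luksOpen", DEVICE, MAPPER)
run("mkfs.ext4", f"/dev/mapper/{MAPPER}")
```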

Cloud Computing/Cloud Management

  • If you, like me, are constantly on the search for more quality information on OpenStack and its components, then you’ll probably find this post on getting Cinder up and running to be helpful. (I did, at least.)
  • Mirantis—recently the recipient of $10 million in funding from various sources—posted a write-up in late November 2012 on troubleshooting some DNS and DHCP service configuration issues in OpenStack Nova. The post is a bit specific to work Mirantis did in integrating an InfoBlox appliance into OpenStack, but might be useful in other situations as well.
  • I found this article on Packstack, a tool used to transform Fedora 17/18, CentOS 6, or RHEL 6 servers into a working OpenStack deployment (Folsom). It seems to me that lots of people understand that getting an OpenStack cloud up and running is a bit more difficult than it should be, and are therefore focusing efforts on making it easier.
  • DevStack is another proof point of the effort going into making it easier to get OpenStack up and running, although the focus for DevStack is on single-host development environments (typically virtual themselves). Here’s one write-up on DevStack; here’s another one by Cody Bunch, and yet another one by the inimitable Brent Salisbury.

Operating Systems/Applications

  • If you’re interested in learning Puppet, there are a great many resources out there; in fact, I’ve already mentioned many of them in previous posts. I recently came across these Example42 Puppet Tutorials. I haven’t had the chance to review them myself yet, but it looks like they might be a useful resource as well.
  • Speaking of Puppet, the puppet-lint tool is very handy for ensuring that your Puppet manifest syntax is correct and follows the style guidelines. The tool has recently been updated to help fix issues as well. Read here for more information.

Storage

  • Greg Schulz (aka StorageIO) has a couple of VMware storage tips posts you might find useful reading. Part 1 is here, part 2 is here. Enjoy!
  • Amar Kapadia suggests that adding LTFS to Swift might create an offering that could give AWS Glacier a real run for the money.
  • Gluster interests me. Perhaps it shouldn’t, but it does. For example, the idea of hosting VMs on Gluster (similar to the setup described here) seems quite interesting, and the work being done to integrate KVM/QEMU with Gluster also looks promising. If I can ever get my home lab into the right shape, I’m going to do some testing with this. Anyone done anything with Gluster?
  • Erik Smith has a very informative write-up on why FIP snooping is important when using FCoE.
  • Via this post on ten useful OpenStack Swift features, I found this page on how to build the “Swift All in One,” a useful VM for learning all about Swift.
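
Once you have a Swift All in One (or any Swift endpoint) running, poking at it from Python is straightforward with python-swiftclient. The auth URL and credentials below follow the defaults the SAIO documentation uses, but treat them as placeholders for your own environment.

```python
# A few Swift basics against a Swift All in One (or any Swift endpoint),
# using python-swiftclient. The auth URL and credentials follow the SAIO
# defaults but should be treated as placeholders.
import swiftclient

conn = swiftclient.Connection(
    authurl="http://saio:8080/auth/v1.0",
    user="test:tester",
    key="testing",
)

conn.put_container("demo")                            # create a container
conn.put_object("demo", "hello.txt", contents=b"hi")  # upload an object
headers, body = conn.get_object("demo", "hello.txt")  # fetch it back
print(body)
```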

Virtualization

  • There’s no GUI for it, but it’s kind of cool that you can indeed create VM anti-affinity rules in Hyper-V using PowerShell. This is another example of how Hyper-V continues to get more competent. Ignore Microsoft and Hyper-V at your own risk…
  • Frank Denneman takes a quick look at using user-defined NetIOC network resource pools to isolate and protect IP-based storage traffic from within the guest (i.e., using NFS or iSCSI from within the guest OS, not through the VMkernel). Naturally, this technique could be used to “protect” or “enhance” other types of important traffic flows to/from your guest OS instances as well.
  • Andre Leibovici has a brief write-up on the PowerShell module for the Nicira Network Virtualization Platform (NVP). Interesting stuff…
  • This write-up by Falko Timme on using BoxGrinder to create virtual appliances for KVM was interesting. I might have to take a look at BoxGrinder and see what it’s all about.
  • In case you hadn’t heard, OVF 2.0 has been announced/released by the DMTF. Winston Bumpus of VMware’s Office of the CTO has more information in this post. I also found the OVF 2.0 frequently asked questions (FAQs) to be helpful. Of course, the real question is how long it will be before vendors add support for OVF 2.0, and how extensive that support actually is.

And that’s it for this time around! Feel free to share your thoughts, suggestions, clarifications, or corrections in the comments below. I encourage your feedback, and thanks for reading.


Welcome to Technology Short Take #18! I hope you find something useful in this collection of networking, OS, storage, and virtualization links. Enjoy!

Networking

The number of articles in my “Networking” bucket continues to overflow; I have so many articles on so many topics (soft switching, OpenFlow, Open vSwitch, MPLS) that it’s hard to get my head wrapped around all of it. Here are a few posts that stuck out to me:

  • Ivan Pepelnjak has a very well-written post explaining the various ways that virtual networking can be decoupled from the physical network.
  • I stumbled across a trio of articles by Denton Gentry on hash tables (part 1, part 2, and part 3). This is an interesting perspective I hadn’t considered before; as we move more into software-defined networks (SDNs), why are we continuing to use the same mechanisms as before? Why not take advantage of more efficient mechanisms as part of this transition?

Servers/Operating Systems

  • Nigel Poulton and I traded a few tweets during HP Discover Vienna about SCSI Express (or SCSI over PCIe, SoP). He wrote up his thoughts about SoP and its future in the storage industry here. Further Twitter-based discussions about fabrics led him to say that HP buying Xsigo would bring the competition back against UCS. I’m not so sure I agree. Xsigo’s server fabric technology/product is interesting, but it seems to me that it’s still adding layers of abstraction that aren’t necessary. As SR-IOV, MR-IOV, and PCIe extension mature, it seems to me that Ethernet as the fabric is going to win. If that’s the case, and HP wants to bring the hurt against UCS, they’re going to have to invest in Ethernet-based fabrics.
  • Speaking of UCS, here’s a “how to” on deploying the UCS Platform Emulator on vSphere. You might also like the UCS PE configuration follow-up post.
  • Here’s what looks to be a handy Mac OS X utility to track how long until your Active Directory password expires. Sounds simple, yes, but useful.

Storage

Virtualization

  • Jason Boche, after some collaboration with Bob Plankers, wrote up a good procedure for expanding the vCloud Director Transfer Server storage space. It’s definitely worth a read if you’re going to be working with vCloud Director.
  • Microsoft has released version 3.2 of the Linux Integration Services for Hyper-V. The new release adds integrated mouse support, updated network drivers, and fixes an issue with SCVMM compatibility.
  • Julian Wood, who I had the opportunity to meet in Copenhagen at VMworld 2011, has published a four-part series on managing vSphere 5 certificates. Follow these links for the series: part 1, part 2, part 3, and part 4.
  • Thinking of deploying Oracle on vSphere? You should probably read this three-part series from VMware’s Business Critical Applications blog: part 1 is here, part 2 is here, and part 3 is here.
  • I’m so used to dealing with VLANs in a vSphere environment, I didn’t consider the challenges that might come up when using them with VMware Workstation. Fortunately, this author did—read his post on mapping VLANs to VMnets in VMware Workstation.
  • I thought that this article on virtual disks with business critical applications would be a deep dive on which virtual disk formats (thin, lazy zeroed, eager zeroed) are best suited for various applications. While the article does discuss the different virtual disk formats, unfortunately that’s as far as it goes.
  • Fellow VMware vSphere Design co-author Forbes Guthrie highlights an important design concern with AutoDeploy: what about a virtual vCenter instance? Read his full article for the in-depth discussion.
  • This post by William Lam gives a good overview of when vSphere MoRefs change (or don’t change).
  • Here’s a good explanation why NIC teaming can’t be used with iSCSI binding.
  • Cormac Hogan also posted a nice overview of some new vmkfstools enhancements in vSphere 5.
  • Terence Luk posts a detailed procedure to help recover VMware Site Recovery Manager in the event of a failure of one of the SRM servers. Good information—thanks Terence!

And that’s it for this time around. Feel free to add your thoughts in the comments below—all comments are welcome! (Please provide full disclosure of vendor affiliations/employment where applicable. Thanks!)


This is BRKCOM-3002, Network Redundancy and Load Balancing Designs for UCS Blade Servers, presented by none other than M. Sean McGee (available as @mseanmcgee on Twitter; I highly recommend you follow him on Twitter if you are interested in UCS). I was really looking forward to this presentation, as I’ve spoken with Sean before and I know that he is a super-sharp UCS guru. Sean also blogs here.

Sean starts out the presentation by asking, “Do you know how UCS networking thinks?” This sets the stage for how he is going to explain how frames move through UCS, both northbound and southbound. So part of his goal today is to teach us how UCS networking thinks.

A quick outline of the presentation is:

  • A review of components (fabric interconnects and extenders, converged network adapters)
  • A look at network path control mechanisms

Sean spends a few minutes reviewing the high-level architecture of the UCS and some of the differentiation between UCS and other typical blade architectures. He reminds the attendees that the fabric interconnects are more than just switches; they are the management for the entire system (the fabric interconnects are where UCS Manager runs). UCS Manager offers both high-level views of UCS as well as very detailed views of UCS. Sean also spends a bit of time reviewing the idea of service profiles as an encapsulation of the server’s identity and the central role that service profiles play in a UCS environment.

Over the next few minutes Sean discusses the trade-offs that come with blade servers (decreased server visibility and increased points of management) and how that plays into Cisco’s approach for UCS. The use of fabric extenders is key to eliminating some of the visibility loss and reducing points of management. This leads into an extended discussion of the fabric interconnects and fabric extenders and the relationship between them.

Next, a brief overview of the CNAs available for the UCS leads to a discussion of the VIC (Palo) and the use of Cisco’s FEX technology in virtualized environments. Here are the different FEX “types”:

  • Rack FEX: This is a typical N5K/N2K sort of deployment.
  • Chassis FEX: This is what UCS uses: UCS 61xx plus fabric extenders in the blade chassis.
  • Adapter FEX: This is VIC (Palo), creating virtual interfaces all the way into an operating system (any OS for which Cisco provides VIC/Palo drivers).
  • VM-FEX: This is the use of VIC (Palo) plus a management link between UCS and vCenter Server to connect VMs directly to the VIC, essentially “bypassing” the hypervisor. (As a side note: this has numerous design considerations that you have to consider with regards to vSphere.)

Now Sean opens up some new stuff…

A new fabric interconnect is in the works: the Cisco 6248UP fabric interconnect. This has more ports and supports Unified Ports. The Unified Port support is very cool; it allows any of the 48 ports to be either Ethernet or Fibre Channel.

A new fabric extender is also being released: the UCS 2208XP fabric extender (it goes in the chassis). The 2208XP fabric extender will offer 8 uplinks to the fabric interconnects (2x the uplinks of the 1st gen UCS fabric extender). The new fabric extender also offers more downlinks to individual servers (4x the downlinks versus previous generations). You can also use port-channeling between the fabric extender and the fabric interconnects.

Cisco is also releasing the VIC 1280, a new generation of the VIC (Palo) adapter. This offers dual 4x 10GbE uplinks for up to 80Gbps per host. The VIC 1280 can also present up to 116 interfaces to the OS (or hypervisor). The VIC 1280 can also use port-channeling up to the fabric extender, which was not supported in previous generations.

All of these components are fully interoperable with previous generation versions of the components.

Sean now switches gears to focus on the behaviors of UCS networking and “how UCS networking thinks.”

The first topic is the network path control mechanisms. There are several mechanisms in place, depending on “where” in the networking stack they are applicable. For example, at the “edge” of UCS the mechanism is either Spanning Tree Protocol or border port pinning (depending on whether the fabric interconnects are in Switch Mode or End Host Mode), and that choice only operates at the fabric interconnect level.

Sean spends a few minutes explaining the differences between a traditional switch and a vSwitch. For example, vSwitches don’t do MAC learning and don’t run STP; unlike traditional switches, they receive broadcasts on only one (uplink) port and use host pinning for load balancing.

Sean then uses that discussion to frame the difference between UCS fabric interconnects in Switch Mode (acting like a traditional switch) versus End Host Mode (acting like a vSwitch). End Host Mode makes the fabric interconnects “look” and “act” like a vSwitch. You can use active/active uplinks without using port channeling, and the fabric interconnect uses pinning to attach servers to uplinks. Overall, it was a very good comparison.
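
To make the comparison concrete, here's a toy Python model of the two behaviors (my own illustration, not anything Sean presented): a traditional switch learns MAC addresses from whatever port it hears them on, while an End Host Mode fabric interconnect simply pins each server vNIC to an uplink and never learns from the network side.

```python
# Toy illustration (mine, not from the session) of the contrast: a traditional
# switch learns MAC addresses from whatever port it hears them on, while an
# End Host Mode fabric interconnect pins each server vNIC to an uplink and
# never learns from the network side.

class LearningSwitch:
    def __init__(self):
        self.mac_table = {}                 # MAC address -> port, learned

    def frame_in(self, src_mac, in_port):
        self.mac_table[src_mac] = in_port   # learn on every received frame

    def port_for(self, dst_mac):
        return self.mac_table.get(dst_mac)  # unknown destination -> flood


class EndHostModeFI:
    def __init__(self, uplinks):
        self.uplinks = uplinks
        self.pinning = {}                   # server vNIC -> pinned uplink

    def register_vnic(self, vnic):
        # Dynamic pinning: spread server vNICs across the active uplinks;
        # no MAC learning and no STP toward the upstream network.
        uplink = self.uplinks[len(self.pinning) % len(self.uplinks)]
        self.pinning[vnic] = uplink
        return uplink


fi = EndHostModeFI(uplinks=["Eth1/1", "Eth1/2"])
print(fi.register_vnic("blade1-vnic0"))  # -> Eth1/1
print(fi.register_vnic("blade2-vnic0"))  # -> Eth1/2
```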

Border port pinning is what UCS uses to load balance traffic from servers onto the fabric interconnect uplinks. Individual blades are either dynamically pinned to an uplink or statically (manually) pinned to an uplink. Traffic on the same VLAN and the same fabric interconnect is locally switched. All uplinks are active (because of End Host Mode) and there’s no concern for loops and no need for port channeling. You can use port channeling for uplinks, and those port channels can be used with either dynamic or static pinning.

Some best practices for pinning:

  • Use dynamic pinning unless there is a specific reason to use static pinning.
  • Use port channels where possible for improved redundancy.

UCS also offers fabric port pinning (the way in which the fabric extender assigns uplinks). There are two modes: discrete mode and port channel mode. In discrete mode, you can use Fabric Failover (more info later) to assign a primary fabric path and a backup fabric path. This would allow you to modify the assignment of a server to a fabric extender uplink. You could also use NIC teaming in the OS (or hypervisor) to do the same thing and direct traffic to one port versus another port.

Port channel mode (only supported on the Gen2 components) changes all that behavior; the uplinks between the fabric extenders and the fabric interconnects can be used by any server in the chassis. This is a nice improvement, in my opinion. Port channel mode is selected in the Chassis Discovery Policy in UCS Manager.

Port channel mode versus discrete mode is selected on a per-chassis basis; you can have different chassis with different settings. This offers some flexibility for customers who might want more specific control over traffic placement.

The Gen2 CNA (the VIC 1280) and the Gen2 fabric extender (UCS 2208XP) can also use port channeling between them (this is new).

Sean then walked the attendees through various combinations of Gen1 and Gen2 hardware with both discrete mode and port channel mode. His recommended practice (assuming you have all Gen2 hardware) is to use port channel mode both between the VIC 1280 and the fabric extender and between the fabric extender and the fabric interconnect.

Next up is a discussion of fabric failover and how it behaves and acts. For customers that are concerned about having only a single CNA, you can deploy full-width blades with dual CNAs. Naturally, you configure fabric failover within a service profile. Only the Gen1 Menlo cards and Palo cards support fabric failover. Note that you can use fabric failover in conjunction with VM-FEX (pushing virtual interfaces all the way to VMs) to help control which VMs communicate over which fabric.

OS NIC teaming is also an option for directing/controlling traffic within a UCS environment. As with all other things, there are advantages and disadvantages of both approaches, so plan carefully.

Sean wraps up the session with a review of the decision process UCS follows to drive a frame northbound out of UCS, and then the same decision process when forwarding frames southbound.

All in all, this was an excellent session with lots of useful information. A fair amount of it was review for me, but it was still useful to attend.


Now that I’ve published the Storage Edition of Technology Short Take #12, it’s time for the Networking Edition. Enjoy, and I hope you find something useful!

  • Ron Fuller’s ongoing deep dive series on OTV (Overlay Transport Virtualization) has been great for me. I knew about the basics of OTV, but Ron’s articles really gave me a better understanding of the technology. Check out the first three articles here: part 1, part 2, and part 3.
  • Similarly, Joe Onisick’s two-part (so far) series on inter-fabric traffic on Cisco UCS is very helpful and informative as well. There are definitely some design considerations that come about from deploying VMware vSphere on Cisco UCS. Have a look at Joe’s articles on his site (Part 1 and Part 2).
  • Kurt Bales’ article on innovation vs. standardization is a great read. The key, in my mind, is innovating (releasing “non-standard” stuff) while also working with the broader community to help encourage standardization around that innovation.
  • Here’s another great multi-part series, this time from Brian Feeny on NX-OS (part 1 here, and part 2 here). Brian exposes some pretty interesting stuff in the NX-OS kickstart and system image.
  • I’ve discussed LISP a little bit here and there, but Greg Ferro reminds us that LISP isn’t a “done deal.”
  • J Metz wrote a good article on the interaction (or lack thereof, depending on how you look at it) between FCoE and TRILL.
  • For a non-networking geek like me, some great resources to become more familiar with TRILL might include this comparison of 802.1aq and TRILL, this explanation from RFC 5556, this discussion of TRILL-STP integration, or this explanation using north-south/east-west terminology. Brad Hedlund’s TRILL write-up from a year ago is also helpful, in my opinion.
  • And as if understanding TRILL, or the differences between TRILL and FabricPath, weren’t enough (see this discussion by Ron Fuller on the topic), we also have 802.1aq Shortest Path Bridging (SPB) thrown in for good measure. If it’s hard for networking experts to keep up with all these developments, think about the non-networking folks like me!
  • Ivan Pepelnjak’s examination of vCDNI-based private networks via Wireshark traces exposes some notable scalability limitations. It makes me wonder, as Ivan does, why VMware chose to use this method versus something more widely used and well-proven, like MPLS? And isn’t there an existing standard for MAC-in-MAC encapsulation? Why didn’t VMware use that existing standard? Perhaps it goes back to innovation vs. standardization again?
  • If you’re interested in more details on vCDNI networks, check out this post by Kamau Wanguhu.
  • Omar Sultan of Cisco has a quick post on OpenFlow and Cisco’s participation here.
  • Jake Howering of Cisco (nice guy, met him a few times) has a write-up on an interesting combination of technologies: ACE (load balancing) plus OTV (data center interconnect), with a small dash of VMware vCenter API integration.

I think that’s going to do it for this Networking Edition of Technology Short Take #12. I’d love to hear your thoughts, suggestions, or corrections about anything I’ve mentioned here, so feel free to join the discussion in the comments. Thanks for reading!


A little over a month ago, I was installing VMware ESXi on a Cisco UCS blade and noticed something odd during the installation. I posted a tweet about the incident. Here’s the text of the tweet in case the link above stops working:

Interesting…this #UCS blade has local disks but all disks are showing as remote during #ESXi install. Odd…

Several people responded, indicating they’d run into similar situations. No one—at least, not that I recall—was able to tell me why this was occurring, only that they’d seen it happen before. And it wasn’t just limited to Cisco UCS blades; a few people posted that they’d seen the behavior with other hardware, too.

This morning, I think I found the answer. While reading this post about scratch partition best practices on VMware ESXi Chronicles, I clicked through to a VMware KB article referenced in the post. The KB article discussed all the various ways to set the persistent scratch location for ESXi. (Good article, by the way. Here’s a link.)

What really caught my attention, though, was a little blurb at the bottom of the KB article in reference to examples where scratch space may not be automatically defined on persistent storage. Check this out (emphasis mine):

2.  ESXi deployed in a Boot from SAN configuration or to a SAS device. A Boot from SAN or SAS LUN is considered Remote, and could potentially be shared among multiple ESXi hosts. Remote devices are not used for scratch to avoid collisions between multiple ESXi hosts.

There’s the answer: although these drives are physically inside the server and are local to the server, they are considered remote during the VMware ESXi installation because they are SAS drives. Mystery solved!


How’s that for acronyms?

In all seriousness, though, as I was installing VMware ESXi this evening onto some remote Cisco UCS blades, I ran into some interesting keymapping issues and I thought it might be handy to document what worked for me in the event others run into this issue as well.

So here’s the scenario: I’m running Mac OS X 10.6.7 on my MacBook Pro, and using VMware View 4.6 to connect to a remote Windows XP Professional desktop. Within that Windows XP Professional session, I’m running Cisco UCS Manager 1.4(1i) and loading up the KVM console to access the UCS blades. From there, I’m installing VMware ESXi onto the blades from a mapped ISO file.

What I found is that the following keystrokes worked correctly to pass through these various layers to the ESXi install process:

  • For the F2 key (necessary to log in to the ESXi DCUI), use Ctrl+F2 (in some places) or Cmd+F2 (in other places).
  • For the F5 key (to refresh various displays), the F5 key alone works.
  • For the F11 key (to confirm installation at various points during the ESXi install process), use Cmd+F11.
  • For the F12 key (used at the DCUI to shutdown/reboot), use Cmd+F12.

There are a couple of factors that might affect this behavior:

  • In the Keyboard section of System Preferences, I have “Use F1, F2, etc., keys as standard function keys” selected; this means that I have to use the Fn key to access any “special” features of the function keys (like increasing volume or adjusting screen brightness). I haven’t tested what impact this has on this key mapping behavior.
  • The Mac keyboard shortcuts in the preferences of the Microsoft Remote Desktop Connection do not appear to conflict with any of the keystrokes listed above, so it doesn’t appear that this is part of the issue.

If I find more information, or if I figure out why the keystrokes are mapping the way they are I’ll post an update to this article. In the meantime, if you happen to need to install VMware ESXi into a Cisco UCS blade via the UCSM KVM through VMware View from a Mac OS X endpoint, now you know how to make the keyboard shortcuts work.

Courteous comments are always welcome—speak up and contribute to the discussion!


That’s right folks, it’s time for another installation of Technology Short Takes. This is Technology Short Take #11, and I hope that you find the collection of items I have for you this time around both useful and informative. But enough of the introduction—let’s get to the good stuff!

Networking

  • David Davis (of Train Signal) has a good write-up on the Petri IT Knowledgebase on using a network packet analyzer with VMware vSphere. The key, of course, is enabling promiscuous mode. Read the article for full details.
  • Jason Nash instructs you on how to enable jumbo frames on the Nexus 1000V, in the event you’re interested. Jason also has good coverage of the latest release of the Nexus 1000V; worth reading in my opinion. Personally, I’d like Cisco to get to version numbers that are a bit simpler than 4.2(1) SV1(4).
  • Now here’s a link that is truly useful: Greg Ferro has put together a list of Cisco IOS CLI shortcuts. That’s good stuff!
  • There are a number of reasons why I have come to generally recommend against link aggregation in VMware vSphere environments, and Ivan Pepelnjak exposes another one that rears its head in multi-switch environments in this article. With the ability for vSphere to utilize all the uplinks without link aggregation, the need to use link aggregation isn’t nearly as paramount, and avoiding it also helps you avoid some other issues as well.
  • Ivan also tackles the layer 2 vs. layer 3 discussion, but that’s beyond my pay grade. If you’re a networking guru, then this discussion is probably more your style.
  • This VMware KB article, surprisingly enough, seems like a pretty good introduction to private VLANs and how they work. If you’re not familiar with PVLANs, you might want to give this a read.

Servers

  • Want to become more familiar with Cisco UCS, but don’t have an actual UCS to use? Don’t feel bad, I don’t either. But you can use the Cisco UCS Emulator, which is described in a bit more detail by Marcel here. Very handy!

Storage

  • Ever find yourself locked out of your CLARiiON because you don’t know or can’t remember the username and password? OK, maybe not (unless you inherited a system from your predecessor), but in those instances this post by Brian Norris will come in handy.
  • Fabio Rapposelli posted a good write-up on the internals of SSDs, in case you weren’t already aware of how they worked. As SSDs gain traction in many different areas of storage, knowing how SSDs work helps you understand where they are useful and where they aren’t.
  • Readers that are new to the storage space might find this post on SAN terminology helpful. It is a bit specific to Cisco’s Nexus platform, but the terms are useful to know nevertheless.
  • If you like EMC’s Celerra VSA, you’ll also like the new Uber VSA Guide. See this post over at Nick’s site for more details.
  • Fellow vSpecialist Tom Twyman posted a good write-up on installing PowerPath/VE. It’s worth reading if you’re considering PP/VE for your environment.
  • Joe Kelly of Varrow posted a quick write-up about VPLEX and RecoverPoint, in which he outlines one potential issue with interoperability between VPLEX and RecoverPoint: how will VPLEX data mobility affect RP? For now, you do need to be aware of this potential issue. For more information on VPLEX and RecoverPoint together, I’d also suggest having a look at my write-up on the subject.
  • I won’t get involved in the discussion around Open FCoE (the software FCoE stack announced a while back); plenty of others (J Metz speaks out here, Chad Sakac weighed in here, Ivan Pepelnjak offers his opinions here, and Wikibon here) have already thrown in. Instead, I’ll take the “Show me” approach. Intel has graciously offered me two X520 adapters, which I’ll run in my lab next to some Emulex CNAs. From there, we’ll see what the differences are under the same workloads. Look for more details from that testing in the next couple of months (sorry, I have a lot of other projects on my plate).
  • Jason Boche has been working with Unisphere, and apparently he likes the Unisphere-VMware integration (he’s not alone). Check out his write-up here.

Virtualization

  • For the most part, a lot of people don’t have to deal with SCSI reservation conflicts any longer. However, they can happen (especially in older VMware Infrastructure 3.x environments), and in this post Sander Daems provides some great information on detecting and resolving SCSI reservation conflicts. Good write-up, Sander!
  • If you like the information vscsiStats gives you but don’t like the format, check out Clint Kitson’s PowerShell scripts for vscsiStats.
  • And while we’re talking vscsiStats, I would be remiss if I didn’t mention Gabe’s post on converting vscsiStats data into Excel charts.
  • Rynardt Spies has decided he’s going Hyper-V instead of VMware vSphere. OK, only in his lab, and only to learn the product a bit better. While we all agree that VMware vSphere far outstrips Hyper-V today, Rynardt’s decision is both practical and prudent. Keep blogging about your experiences with Hyper-V, Rynardt—I suspect there will be more of us reading them than perhaps anyone will admit.
  • Brent Ozar (great guy, by the way) has an enlightening post about some of the patching considerations around Azure VMs. All I can say is ouch.
  • The NIST has finally issued the final version of full virtualization security guidelines; see the VMBlog write-up for more information.
  • vCloud Connector was announced by VMware last week at Partner Exchange 2011 in Orlando. More information is available here and here.
  • Arnim van Lieshout posted an interesting article on how to configure EsxCli using PowerCLI.
  • Sander Daems gets another mention in this installation of Technology Short Takes, this time for good information on an issue with ESXi and certain BIOS revisions of the HP SmartArray 410i array controller. The fix is an upgrade to the firmware.
  • Sean Clark did some “what if” thinking in this post about the union of NUMA and vMotion to create VMs that span multiple physical servers. Pretty interesting thought, but I do have to wonder if it’s not that far off. I mean, how many people saw vMotion coming before it arrived?
  • The discussion of a separate “management cluster” has been getting some attention recently. First was Scott Drummonds, with this post and this follow up. Duncan responded here. My take? I’ll agree with Duncan’s final comment that “an architect/consultant will need to work with all the requirements and constraints”. In other words, do what is best for your situation. What’s right for one customer might not be right for the next.
  • And speaking of vShield, be sure to check out Roman Tarnavski’s post on extending vShield.
  • Interested in knowing more about how Hyper-V does CPU scheduling? Ben Armstrong is happy to help out, with Part 1 and Part 2 of CPU scheduling with Hyper-V.
  • Here’s a good write-up on how to configure Pass-Through Switching (PTS) on UCS. This is something I still haven’t had the opportunity to do myself. It kind of helps to actually have a UCS for stuff like this.

It’s time to wrap up now; I think I’ve managed to throw out a few links and articles that someone should find useful. As always, feel free to speak up in the comments below.


Welcome to Technology Short Take #9, the last Technology Short Take for 2010. In this Short Take, I have a collection of links and articles about networking, servers, storage, and virtualization. Of note this time around: some great DCI links, multi-hop FCoE finally arrives (sort of), a few XenServer/XenDesktop/XenApp links, and NTFS defragmentation in the virtualized data center. Here you go—enjoy!

Networking

  • Brad Hedlund has a great post discussing Nexus 7000 connectivity options for Cisco UCS. I’ll include it in this section since it focuses more on the networking aspect rather than UCS. I haven’t had the time to read the full PDF linked in Brad’s article, but the other topics he discusses in the post—FabricPath networks, F1 vs. M1 linecards, and FCoE connectivity—are great discussions. I’m confident the PDF is equally informative and useful.
  • This UCS-specific post describes how northbound Ethernet frame flows work. Very useful information, especially if you are new to Cisco UCS.
  • Data Center Interconnect (DCI) is a hot topic these days considering that it is a key component of long-distance vMotion (aka vMotion at distance). Ron Fuller (who I had the pleasure of meeting in person a few weeks ago, great guy), aka @ccie5851 on Twitter and one of the authors of NX-OS and Cisco Nexus Switching: Next-Generation Data Center Architectures (available from Amazon), wrote a series on the various available DCI options such as EoMPLS, VPLS, A-VPLS, and OTV. If you’re considering DCI—especially if you’re a non-networking guy and need to understand the impact of DCI on the networking team—this series of articles is worth reading. Part 1 is here and part 2 is here.
  • And while we are discussing DCI, here’s a brief post by Ivan Pepelnjak about DCI encryption.
  • This post was a bit deep for me (I’m still getting up to speed on the more advanced networking topics), but it seemed interesting nevertheless. It’s a how-to on redistributing routes between VRFs.
  • Optical or twinax? That’s the question discussed by Erik Smith in this post.
  • Greg Ferro also discusses cabling in this post on cabling for 40 Gigabit and 100 Gigabit Ethernet.

Servers

  • As you probably already know, Cisco released version 1.4 of the UCS firmware. This version incorporates a number of significant new features: support for direct-connected storage, support for incorporating C-Series rack-mount servers into UCS Manager (via a Nexus 2000 series fabric extender connected to the UCS 61x0 fabric interconnects), and more. Jeremy Waldrop has a brief write-up that lists a few of his favorite new features.
  • This next post might only be of interest to partners and resellers, but having been in that space before joining EMC I fully understand the usefulness of having a list of references and case studies. In this case, it’s a list of case studies and references for Cisco UCS, courtesy of M. Sean McGee (who I hope to meet in person in St. Louis in just a couple of weeks).

Storage

Virtualization

  • Using XenServer and need to support multicast? Look to this article for the information on how to enable multicast with XenServer.
  • A couple of colleagues over at Intel (I worked with Brian on one of his earlier white papers) forwarded me the link to their latest Ethernet virtualization white paper, which discusses the use of 10 Gigabit Ethernet with VMware vSphere. You can find the link to the latest paper in this blog entry.
  • Bhumik Patel has a good write-up on the “behind-the-scenes” technical details that went into the Cisco-Citrix design guides around XenDesktop/XenApp on Cisco UCS. Bhumik provides the details on things like how many blades were used in the testing, what the configuration of the blades was, and what sort of testing was performed.
  • Thinking of carving your storage up into guest OS datastores for VMware? You might want to read this first for some additional considerations.
  • I know that this has seen some traffic already, but I did want to point out Eric Sloof’s post on the Xenoss XenPack for ESXTOP. I haven’t had the opportunity to use it yet, but would certainly love to hear from anyone who has. Feel free to share your experiences in the comments.
  • As is usually the case, Duncan Epping has had some great posts over the last few weeks. His post on shares set on resource pools highlights the need to adjust the shares value (and other resource constraints) based on the contents of the pool, something that many people forget to do. He also provides a breakdown of the various vCenter memory statistics, and discusses an issue with binding a Provider vDC directly to an ESX/ESXi host.
  • PowerCLI 4.1.1 has some improvements for VMware HA clusters which are detailed in this VMware vSphere PowerCLI Blog entry.
  • Frank Denneman has three articles which have caught my attention over the last few weeks. (All his stuff is good, by the way.) First is his two-part series on the impact of oversized virtual machines (part 1 and part 2). Some of the impacts Frank discusses include memory overhead, NUMA architectures, shares values, HA slot size, and DRS initial placement. Apparently a part 3 is planned but hasn’t been published yet (see some of the comments in part 2). Also worth a read is Frank’s recent post on node interleaving.
  • Here’s yet another tool in your toolkit to help with the transition to ESXi: a post by Gabe on setting logfile location, swap file, SNMP, and vmkcore partition in ESXi.
  • Here’s another guide to creating a bootable ESXi USB stick (on Windows). Here’s my guide to doing it on Mac OS X.
  • Jon Owings had an idea about dynamic cluster pooling. This is a pretty cool idea—perhaps we can get VMware to include it in the next major release of vSphere?
  • Irritated that VMware disabled copy-and-paste between the VM and the vSphere Client in vSphere 4.1? Fix it with these instructions.
  • This white paper on configuration examples and troubleshooting for VMDirectPath was recently released by VMware. I haven’t had the chance to read it yet, but it’s on my “to read” list. I’ll just have a look at that in my copious free time…
  • David Marshall has posted on VMblog.com a two-part series on how NTFS causes I/O bottlenecks on virtual machines (part 1 and part 2). It’s a great review of NTFS and how Microsoft’s file system works. Ultimately, the author of the posts (Robert Nolan) sets the readers up for the need for NTFS defragmentation in order to reduce the I/O load on virtualized infrastructures. While I do agree with Mr. Nolan’s findings in that regard, there are other considerations that you’ll also want to include. What impact will defragmentation have on your storage array? For example, I think that NetApp doesn’t recommend using defragmentation in conjunction with their storage arrays (I could be wrong; can anyone confirm?). So, I guess my advice would be to do your homework, see how defragmentation is going to affect the rest of your environment, and then proceed from there.
  • Microsoft thinks that App-V should be the most important tool in your virtualization tool belt. Do you agree or disagree?
  • William Lam has instructions for how to identify the origin of a vSphere login. This might not be something you need to do on a regular basis, but when you do need to do it you’ll be thankful you have the instructions how.

I guess it’s time to wrap up now, since I have likely overwhelmed you with a panoply of data center-related tidbits. As always, I encourage your feedback, so please feel free to speak up in the comments. Thanks for reading!

