Technology Short Take #7

Welcome to Technology Short Take #7! This time around I have a collection of links from networking, servers, storage, and virtualization. Our hot topics in this issue include Fibre Channel over Ethernet (FCoE) and its need—or lack thereof—for congestion management, Ubuntu on Hyper-V, the benefits of VAAI, and more!

Networking

I have a lot of FCoE-related links this time around. I’m not sure if that means FCoE has been getting more coverage or if it’s just a case of confirmation bias.

  • Need to decrypt a Cisco type 7 password? This page provides instructions on how it can be done; a minimal decoder sketch follows this list. (Please be sure to use your powers for good, not evil.)
  • This blog post catalogue links to a treasure trove of networking information.
  • I suppose this is one way of dealing with requests to do long-distance vMotion. I’m not so sure I agree that it’s an effective way.
  • The use of NIV to create the equivalent of multi-hop FCoE is something I discussed a while ago, but Brad Hedlund recently revisited it. I can see the arguments both for and against considering fabric extenders as multi-hop FCoE, and I can also see the need for standard terminology to describe these things. Without standard terminology, “multi-hop FCoE” means different things to different vendors, and it’s hard for customers to make valid comparisons.
  • Erik Smith, a relatively new blogger, has a great introduction to FIP, FIP Snooping Bridges, and FCFs. If you’re new to FCoE—or even if you aren’t and want more detail—this is a great read with loads of relevant information. I’m looking forward to more of Erik’s posts on this topic.
  • The blog battle over FCoE’s need for QCN rages on. Joe Onisick does a good job of explaining QCN and why it might/might not be necessary, so if you’re unfamiliar with the debate that’s a good place to start. Ivan Pepelnjak breaks down 802.1Qau (the QCN standard) even further, providing more details on its operation and behavior. He then weighs in on the debate with this quick explanation and this comparison to Frame Relay. In the end, the answer to the question of FCoE’s need for QCN really boils down to everyone’s favorite IT answer: “It depends.” In this case, it depends upon your network design. With more DCB-capable switches between the end nodes and the FCFs, QCN becomes more valuable. With fewer (or no) DCB-capable switches between the end nodes and the FCFs, QCN offers far less benefit.
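
Since the first link above deals with type 7 passwords, here’s a minimal decoder sketch. It implements the widely documented scheme (a two-digit decimal seed followed by hex bytes XORed against a fixed, well-known key string); the key constant below is the commonly published one, so treat this as illustrative rather than authoritative:

```python
# Cisco type 7 "encryption" is just XOR obfuscation against a fixed,
# publicly known key; the first two digits of the encoded string are a
# decimal seed selecting the starting offset into that key.
XLAT = "dsfd;kfoA,.iyewrkldJKDHSUBsgvca69834ncxv9873254k;fg87"

def decrypt_type7(encoded):
    """Decode a type 7 string such as '060506324F41' (-> 'cisco')."""
    seed = int(encoded[:2])                 # starting offset into XLAT
    hex_pairs = encoded[2:]
    chars = []
    for i in range(0, len(hex_pairs), 2):
        byte = int(hex_pairs[i:i + 2], 16)  # one obfuscated byte
        chars.append(chr(byte ^ ord(XLAT[(seed + i // 2) % len(XLAT)])))
    return "".join(chars)

if __name__ == "__main__":
    print(decrypt_type7("060506324F41"))    # prints "cisco"
```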

Servers

I’m adding this section because I have some articles that apply to servers, but not necessarily to virtualization. Since it fits in nicely with the data center theme of Technology Short Takes, it seems like a reasonable addition.

  • Jeff Allen, a UCS-focused CSE at Cisco, recently posted this guide to SAN boot with Cisco UCS. It’s definitely worth a read, especially if you’re new to UCS or haven’t done boot from SAN with UCS before.
  • I haven’t had nearly as much time to blog about Cisco UCS as I would have liked, but Brian Gracely included me in this list of people to follow for Cisco UCS information. Thanks, Brian! I’ll do my best to earn my inclusion on the list.
  • Chris Fendya of WWT posted instructions on how to slipstream the Cisco UCS drivers into the installation of Windows Server 2003.

Storage

It’s funny to me that the storage section of these posts is typically the shortest. There are plenty of storage-related blogs out there, but almost all of them are high-level and tend not to provide the sort of down-to-earth “in the trenches” information I like to include. If readers have any suggestions for blogs that provide this sort of information, I’d love to hear them.

  • InformationWeek recently published this article on how to break free from Tier 1 SAN vendors. (Disclosure: I work for just such a Tier 1 SAN vendor.) I can’t say that I agree with the author’s reasoning; by the same logic, customers should break free from top-tier server vendors and buy white box servers instead. Yet companies such as HP and Dell are still selling lots of servers. Why is that? Because the value of a top-tier server is greater than the sum of its parts, and the same can be said for Tier 1 storage arrays. Now, having said that, I do agree that storage virtualization—which was the real focus of the InformationWeek article—can bring a lot of value and flexibility to the data center. I just don’t think that storage virtualization and Tier 1 storage arrays are mutually exclusive.
  • Here is a good “how to” on enabling ALUA and Round Robin multipathing with ESX and a CLARiiON CX4 array; a small scripted sketch of the host-side change follows this list.
  • Bob Plankers has a great article on the impact of VAAI on storage operations. In this post, he shows how the write rate for his VAAI-capable HDS AMS 2500 drops to nothing when cloning templates. This is a great demonstration of how VAAI helps offload storage operations from the hosts to the array. Keep in mind that VAAI might not make operations faster, but it will make them more efficient. (It’s a subtle distinction, but an important one nevertheless.)
  • If you’re considering pursuing CCIE Storage (something I’ve been strongly considering myself), Brian Feeny posted a list of CCIE Storage preparation resources.
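
As a follow-up to the ALUA/Round Robin “how to” above, here’s a small sketch of how the host-side change might be scripted on an ESX 4.x system, where the esxcli nmp namespace sets the default path selection policy (PSP) for the CLARiiON ALUA SATP. The function name is mine, the command assumes the ESX 4.x-era syntax (later releases moved this under esxcli storage nmp), and the array-side step of actually enabling ALUA on the CX4 is not covered here; see the linked post for the full procedure:

```python
# Sketch only: shell out to esxcli on an ESX 4.x host to make Round Robin
# the default path selection policy for the CLARiiON ALUA SATP.
import subprocess

def set_default_psp(satp="VMW_SATP_ALUA_CX", psp="VMW_PSP_RR"):
    """Set the default path selection policy (PSP) for the given SATP."""
    subprocess.check_call([
        "esxcli", "nmp", "satp", "setdefaultpsp",
        "--satp", satp,  # CLARiiON ALUA storage array type plugin
        "--psp", psp,    # Round Robin path selection policy
    ])

if __name__ == "__main__":
    set_default_psp()
    # Existing LUNs may not pick up the new default until their paths are
    # reclaimed or the host is rebooted.
```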

Virtualization

That wraps up this installment of Technology Short Takes. As always, your comments, thoughts, suggestions, or clarifications are welcome, so please speak up in the comments!

4 comments

  1. DM

    If this is “short” I’d hate to see long. :)

  2. Stu Fox

    Didn’t I cover that in your last short take? Dynamic memory allows you to allocate more memory than is physically present in the machine; you just don’t want to get to the point where all those machines want to use that physical memory (you really don’t want to get to that point on a VMware host either, do you?). You get the benefit of a VM with 4GB allocated but only consuming 2GB, hence the other 2GB is available for other VMs to use. It certainly has value in increasing the density of VMs on a host where memory has traditionally been a bottleneck. I’m not sure what other bits of “overcommitment” you expect to see.

  3. slowe

    Stu, thanks. If you did reply to my earlier post, then I apologize for calling it out again. I’m only human, after all. :-)

    My understanding was that you still could not allocate more memory than the host has installed. That didn’t make any sense to me. Apparently my understanding was incorrect? I do agree with the benefits of memory overcommitment; when used appropriately and judiciously, you can significantly increase VM density without sacrificing performance.
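
    To put some deliberately made-up numbers on that density point, here’s a trivial sketch (the figures are illustrative only):

    ```python
    # Toy numbers only: three VMs, each allocated 4 GB but consuming ~2 GB.
    host_physical_gb = 8
    vms = [{"allocated_gb": 4, "consumed_gb": 2} for _ in range(3)]

    allocated = sum(vm["allocated_gb"] for vm in vms)  # 12 GB promised to guests
    consumed = sum(vm["consumed_gb"] for vm in vms)    # 6 GB actually in use

    print("Overcommit ratio: %.1fx" % (allocated / float(host_physical_gb)))  # 1.5x
    print("Physical headroom: %d GB" % (host_physical_gb - consumed))         # 2 GB
    ```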

    Thanks again for your response, Stu!

  4. Stu Fox

    Replying late as usual.

    Yes, you can easily allocate more memory than a host has installed. As I noted above, you wouldn’t really want to get into a position where VMs were consuming more physical memory than was present. Looking at my 8GB system, I can enable a single machine to have a maximum virtual memory of 64GB. I could also easily assign memory across multiple machines where the sum of the maximums is well beyond the physical memory in the machine. Dynamic memory will handle the allocation of memory across those machines.

    Cheers & Merry Christmas.