Welcome to Technology Short Take #33, the latest in my irregularly-published series of articles discussing various data center technology-related links, articles, rants, thoughts, and questions. I hope that you find something useful here. Enjoy!

Networking

  • Tom Nolle asks the question, “Is virtualization reality even more elusive than virtual reality?” It’s a good read; the key thing that I took away from it was that SDN, NFV, and related efforts are great, but what we really need is something that pulls all of these together in a way that lets customers (and providers) reap the benefits.
  • What happens when multiple VXLAN logical networks are mapped to the same multicast group? Venky explains it in this post. Venky also has a great write-up on how the VTEP (VXLAN Tunnel End Point) learns and creates the forwarding table.
  • This post by Ranga Maddipudi shows you how to use App Firewall in conjunction with VXLAN logical networks.
  • Jason Edelman is on a roll with a couple of great blog posts. First up, Jason goes off on a rant about network virtualization, briefly hitting topics like the relationship between overlays and hardware, the role of hardware in network virtualization, the changing roles of data center professionals, and whether overlays are the next logical step in the evolution of the network. I particularly enjoyed the snippet from the post by Bill Koss. Next, Jason dives a bit deeper on the relationship between network overlays and hardware, and shares his thoughts on where it does—and doesn’t—make sense to have hardware terminating overlay tunnels.
  • Another post by Tom Nolle explores the relationship—complicated at times—between SDN, NFV, and the cloud. Given that we define the cloud (sorry to steal your phrase, Joe) as elastic, pooled resources with self-service functionality and ubiquitous access, I can see why Tom states that to discuss SDN or NFV without discussing cloud is silly. On the flip side, though, I have to believe that it’s possible for organizations to make a gradual shift in their computing architectures and processes, so one almost has to discuss these various components individually, because trying to tie them all together at once makes the discussion nearly unmanageable. Thoughts?
  • If you haven’t already introduced yourself to VXLAN (one of several draft protocols used as an overlay protocol), Cisco Inferno has a reasonable write-up.
  • I know Steve Jin, and he’s a really smart guy, but I must disagree with some of his statements—written back in April—regarding what software-defined networking is and is not and where it fits. I talked before about the difference between network virtualization and SDN, so no need to mention that again. Also, the two key flaws that Steve identifies—single point of failure and scalability—aren’t flaws with SDN/network virtualization, but rather flaws in a particular implementation of those technologies, IMHO.

Servers/Hardware

  • Correction from the last Technology Short Take—I incorrectly stated that the HP Moonshot offerings were ARM-based, and therefore wouldn’t support vSphere. I was wrong. The servers (right now, at least) are running Intel Atom S1260 CPUs, which are x86-based and do offer features like Intel VT-x. Thanks to all who pointed this out, and my apologies for the error!
  • I missed this on the #vBrownBag series: designing HP Virtual Connect for vSphere 5.x.

Security

Cloud Computing/Cloud Management

  • Hyper-V as hypervisor with OpenStack Compute? Sure, see here.
  • Cody Bunch, who has been focusing quite a bit on OpenStack recently, has a nice write-up on using Razor and Chef to automate an OpenStack build. Part 1 is here; part 2 is here. Good stuff—keep it up, Cody!
  • I’ve mentioned in some of my OpenStack presentations (see SpeakerDeck or Slideshare) that a great place to start if you’re just getting started is DevStack. Here, Brent Salisbury has a nice write-up on using DevStack to install OpenStack Grizzly.

Operating Systems/Applications

  • Boxen, a tool created by GitHub to manage its developers’ OS X Mountain Lion laptops, looks interesting. Might be a useful tool for other environments, too.
  • If you use TextMate2 (I switched to BBEdit a little while ago after being a long-time TextMate user), you might enjoy this quick post by Colin McNamara on Puppet syntax highlighting using TextMate2.

Storage

  • Anyone have more information on Jeda Networks? They’ve been mentioned a couple of times on GigaOm (here and here), but I haven’t seen anything concrete yet. Hey, Stephen Foskett, if you’re reading: get Jeda Networks to the next Tech Field Day.
  • Tim Patterson shares some code from Luc Dekens that helps check VMFS version and block sizes using PowerCLI. This could come in quite handy in making sure you know how your datastores are configured, especially if you are in the midst of a migration or have inherited an environment from someone else.

Virtualization

  • Interested in using SAML and Horizon Workspace with vCloud Director? Tom Fojta shows you how.
  • If you aren’t using vSphere Host Profiles, this write-up on the VMware SMB blog might convince you why you should and show you how to get started.
  • Michael Webster tackles the question: is now the best time to upgrade to vSphere 5.1? Read the full post to see what Michael has to say about it.
  • Duncan points out an easy error to make when working with vSphere HA heartbeat datastores in this post. Key takeaway: sometimes the fix is a lot simpler than we might think at first. (I know I’m guilty of making things more complicated than they need to be at times. Aren’t we all?)
  • Jon Benedict (aka “Captain KVM”) shares a script he wrote to help provide high availability for RHEV-M.
  • Chris Wahl has a nice write-up on using log shipping to protect your vCenter database. It’s a bit over a year old (surprised I missed it until now), and—as Chris points out—log shipping doesn’t protect the database (primary and secondary copies) against corruption. However, it’s better than nothing (which I suspect is what far too many people are using).

Other

  • If you aspire to be a writer—whether that be a blogger, author, journalist, or other—you might find this article on using the DASH method for writing to be helpful. The six tips at the end of the article are especially helpful, I think.

Time to wrap this up for now; the rest will have to wait until the next Technology Short Take. Until then, feel free to share your thoughts, questions, or rants in the comments below. Courteous comments are always welcome!


This is a session titled “OpenStack Back to the Enterprise: Keep Calm and Boldly Go.” The session is led by Florian Otel (@florianotel on Twitter), with HP Cloud Services in EMEA. The purpose of this talk is to share some of the “lessons learned” in how to position OpenStack to enterprise customers and overcome their objections.

Florian starts out the presentation with a slide that says, “This is a business, not a science project.” He re-iterates that this session is about making business sense. He also assures us that this presentation won’t be a glitzy marketing session, either—it will be real, nitty gritty, “in the trenches” knowledge learned when positioning OpenStack to enterprise customers. Finally, Florian acknowledges that his presentation will probably be a bit biased toward service provider-type use cases.

The presentation goes on to display a picture of Geoffrey Moore, who wrote a book titled “Crossing the Chasm”. Florian ties this to the adoption curves of various technologies and Moore’s assertion in his book that “we need to be very mindful of the customers in the market”. Specifically, marketing to the early adopters (on the left edge of the bell curve) is very different from marketing to the mainstream (the bulge of the bell curve).

Next, the presenter shows us a picture of Clayton Christensen, who wrote (among other books) “The Innovator’s Dilemma.” The conclusion drawn in the book is that there are two types of innovation: sustaining innovation and disruptive innovation.

Florian ties these two thoughts together in a chart that combines the adoption curve with the adoption/evolution of OpenStack as a disruptive innovation.

So how does one pitch OpenStack to an enterprise organization? Florian shares this quote: “Never try to sell a meteor to a dinosaur. It wastes your time and annoys the dinosaur.”

If that’s not the right way, then what is? Florian makes the “dreaded Linux-OpenStack comparison,” combining it with models and charts from Moore’s “Crossing the Chasm.” Florian posits that a key adoption point is that the underlying platform—be it Linux or OpenStack—must “become irrelevant.” He points to Comcast’s demo (which is powered by OpenStack) and asks, “Did anyone see OpenStack there?”

Florian goes to another quote from Moore stating that applications have an advantage over platforms when it comes to crossing the chasm. Moore believes that “platforms must be garbed in application clothing” in order to cross the chasm. In other words, “mind the gap” between applications and platforms.

The next slide in Florian’s presentation says this: “The more I love the idea, the less money it makes!!” The key point to take away is that as technologists we often “fall in love” with a technology/project/platform, but we need to be able to articulate the value of this technology/project/platform in some way other than “it’s a really cool technology.” This aligns very closely with my own thinking—we need to adopt some practicality if we want to see the technology/project/platform we love so much actually succeed.

Florian now moves from the abstract and theoretical into a more concrete discussion of various use cases for HPCS (HP Cloud Services). These use cases include archival, collaboration, “cloud bursting,” dev/test PaaS, and production applications. He delves a bit deeper into one particular use case, which he refers to as “Dropbox for the enterprise.”

Next the presenter shares a warning: “All good ideas must die—so that great ideas might live.” Good use cases are going to die and pass away, but new (potentially even better) use cases will emerge. We mustn’t get “tied” to our existing use cases.

There are fundamentally three different areas where a company can focus:

  • Operational excellence
  • Product leadership
  • Customer intimacy

Florian says he believes that one lesson HP learned is that customer intimacy is critically important. He didn’t say so explicitly, but I suspect that customer intimacy matters most at the earlier stages of market adoption (going back to the bell curve of market adoption), while other areas of focus might be more important at later stages.

According to Florian, it’s called the bleeding edge for a reason: be ready to help customers who hurt themselves. It’s also important to “not get in your own way.” Be willing to admit when you’re wrong, hit the Reset button, and press forward with customer needs at the forefront of your vision.

The secret to success is, according to Florian, simple: “Just learn to use OpenStack the way Hendrix uses his guitar.”


Exclusion or Not?

A couple days ago I read Stephen Foskett’s article “Alas, VMware, Whither HDS?”, and I felt like I really needed to respond to this growing belief—stated in Stephen’s article and in the sources to his article—that VMware is, for whatever reason, somehow excluding certain storage vendors from future virtualization-storage integration development. From my perspective, this is just bogus.

As far as I can tell, Stephen’s post—which is just one of several I’ve seen on this subject—is based on two sources: my session blog of VSP3205 and an article by The Register. I wrote the session blog, I sat in the session, and I listened to the presenters. Never once did one of the presenters indicate that the five technology partners that participated in this particular demonstration were the only technology partners with whom they would work moving forward, and my session blog certainly doesn’t state—or even imply—that VMware will only work with a limited subset of storage vendors. In fact, the thought that other storage vendors would be excluded never even crossed my mind until the appearance of The Register’s post. That invalidates my VSP3205 session blog as a credible source for the assertion that VMware would be working with only certain storage companies for this initiative.

The article at The Register cites my session blog and a post by Wikibon analyst David Floyer as its sources. I’ve already shown that my blog doesn’t support the claim that some vendors will be excluded, but what about the other source? The Wikibon article states this:

Wikibon understands that VMware plans to work with the normal storage partners (Dell, EMC, Hewlett Packard, IBM, and NetApp) to provide APIs to help these traditional storage vendors add value, for example by optimizing the placement of storage on the disks.

This statement, however, is not an indication that VMware will work only with the listed storage vendors. (Floyer does not, by the way, cite any sources for that statement.)

Considering all this information, the only source implying that VMware will limit the storage vendors with whom it will work is Chris Mellor at The Register. However, even Chris’ article quotes a VMware spokesperson who says:

“Note that we’re still in early days on this and none of the partners above have yet committed to support the APIs – and while it is our intent to make the APIs open, currently that is not the case given that what was demo’d during this VMworld session is still preview technology.”

In other words, just because HDS or any other vendor didn’t participate (which might indicate that the vendor chose not to participate) does not mean that they are somehow excluded from future inclusion in the development of this proposed new storage architecture. In fact, participation—or lack thereof—at this stage really means nothing, in my opinion. If this proposed storage architecture gets its feet under it and starts to run, then I’m confident VMware will allow any willing storage vendor to participate. In fact, it would be detrimental to VMware to not allow any willing storage partner to participate.

However, it gets more attention if you proclaim that a particular storage vendor was excluded; hence, the title (and subtitle) that The Register used. I have a feeling the reality is probably quite different than the picture painted in some of these articles.


Welcome to Virtualization Short Take #39! This is my latest (as of May 3, 2010) collection of virtualization-related articles, links, and thoughts. I hope you find something useful buried in this random collection of bits of information I’ve stumbled across over the past few weeks.

  • Dave Lawrence aka “the VMguy” had a recent post on Changed Block Tracking and why you (should) care. The difference that CBT makes in backups, replication, and other storage-related tasks can be notable, but remember that you’ll need to upgrade your VMs to VM hardware version 7 first.
  • If you’re running HP Virtual Connect with VMware vSphere, be sure to check out this post about a potential failover failure. According to the post, the problem can be resolved by running newer versions of the HP Virtual Connect firmware, the NIC driver, and the NIC bootcode; see the article for the full details.
  • Have you been visiting the Everything VMware at EMC community? If you’re like me, RSS feeds for areas like this are invaluable. So, here’s a page with all the RSS feeds for the Everything VMware at EMC community. Enjoy!
  • In Part 3 of a series of posts about Hyper-V’s dynamic memory feature, Jeff Woolsey continues to methodically lay out Microsoft’s position on advanced memory technologies and their use in virtualized environments. (There’s also a Part 4, which is a follow-up/Q&A from Part 3.) Jeff provides some great technical information in this post, but to be honest I’m ready for him to just lay out exactly how dynamic memory is going to work.
  • In the articles mentioned above, Jeff mentions that Address Space Layout Randomization (ASLR) should have a negative impact on VMware’s transparent page sharing (TPS). Matt Liebowitz decided to test it and posted the results of his testing. Turns out that—right now, anyway—there is no measurable difference in memory savings due to ASLR.
  • If you’ve been hiding under a rock for a while, you might not have seen the news that VMware finally released the vSphere 4.0 security hardening guide. Now you know.
  • Dave Rose posted a guide on how to incorporate support for the VMXNET2 and VMXNET3 NICs into a PXEBOOT/Kickstart environment. It goes a bit deep for me (I’m not a Linux expert, just a tinkerer), but it appears to be good information.
  • Anyone tested Tom Howarth’s instructions on how to remove FT from a host without vCenter?
  • In reviewing the weekly VMware KB digest, I found a few interesting articles published this past week. The one that really caught my eye, though, was this VMware KB article that provides additional information on the NIC configuration maximums for ESX/ESXi 4.0 and 4.0 U1.
  • Jason Thomasser wrote up a good post on how to install ESX4 from USB using unetbootin.
  • Joe Onisick has a good write-up about the underlying technologies used in HP Virtual Connect Flex-10, the Cisco Virtual Interface Controller (VIC), and the Cisco Nexus 1000V.
  • I also wanted to highlight a few “best practices” documents that I spotted recently in the VMware Communities. These are not new documents by any stretch of the imagination, but they are useful for people who are newer to the virtualization scene. First is this SQL Server best practices document, prepared by the well-known performance guru Scott Drummonds. Also by Scott Drummonds is this web server best practices document (plus an IIS-specific document and an Apache-specific document). There’s also a best practices document for Lotus Domino and a best practices document for Oracle.

Well, that’s it this time around. Feel free to share any useful links or posts in the comments below (courteous comments are always welcome).


Welcome to Virtualization Short Take #34, my occasionally-weekly collection of virtualization-related links, posts, and comments. As usual, this information is a hodge-podge of information I’ve gathered from across the Internet over the last few weeks. I hope that you find something useful or helpful here, and thanks for reading!

  • First up is Arne Fokkema’s PowerCLI script to check Windows VM partition alignment. As one commenter pointed out, the fact that the starting offset isn’t 65536—which is what Arne’s script checks—doesn’t necessarily mean that it isn’t aligned. Generally, you can align a Windows partition by setting the starting offset to any number that is evenly divisible by 4096 (4K). If I’m not mistaken, setting the partition offset to 65536 (64K) also ensures that the partition is stripe-aligned on EMC arrays.
  • Here’s a useful reminder to be sure to keep your dependencies in mind when designing VMware vSphere 4 environments. If you design your environment to rely upon DNS—a common situation, since VMware HA is particularly sensitive to name resolution—then be sure to appropriately architect the DNS infrastructure. This “circular dependency” is one reason why I personally tend to keep vCenter Server on a physical system. Otherwise, you have the virtualization management solution running on the infrastructure it is responsible for managing. (Yes, I know that it’s fully supported for it to be virtualized and such.)
  • Forbes Guthrie’s article on incorporating Active Directory authentication and sudo into the kickstart process is a good read. With regard to his note about enabling root SSH access because of an inability to access the Active Directory DCs: I know that in ESX 3.x you could still log in at the Emergency Console when Active Directory connectivity was unavailable; does anyone know if this is still the case with ESX 4.0? I haven’t taken the time to test it yet.
  • Oh, and speaking of Active Directory authentication, Forbes also published this note about Likewise AD authentication supposedly included in ESX 4.1. Looks like someone at Likewise accidentally spilled the beans…
  • I’m sure that everyone has seen the article by Duncan about the ESX 3.x bug that prevents NIC teaming load balancing from working on the global vSwitch configuration, but if you haven’t—well, now you have. Here’s the corresponding KB article, also linked from Duncan’s article. Duncan also recently published a note about an error while installing vCenter Server that is related to permissions; read it here.
  • Are there even better days ahead for virtualization and those involved in virtualization? David Greenfield of Network Computing seems to think so. The comments in the article do seem to bear out my statements that virtualization experts now need to move beyond consolidation and start helping customers tackle the Tier 1, high-end applications. I believe that this is going to require more planning, more expertise, and more knowledge of the applications’ behaviors in order to be successful.
  • Stephen Dion of virtuBLOG brings up a compatibility issue with Intel quad-port Gigabit Ethernet network adapters when used with VMware ESX 4.0 Update 1. Anyone have any updates or additional information on this issue?
  • If you’re considering virtualizing Exchange Server 2010 on VMware vSphere, be sure to read Kenneth’s article here about Exchange 2010 DAGs and VMotion. At least live migration isn’t supported on Hyper-V, either.
  • Want to run a VM inside a VM? This post on nested VMs over at the VMware Communities site has some very useful information.
  • Paul Fazzone (who I believe is a product manager for the Nexus 1000V) highlights a good point-counterpoint article in which Bob Plankers and David Davis discuss the benefits and drawbacks of the Cisco Nexus 1000V. Both writers make excellent points; I guess the real conclusion is that both options offer value for different audiences. Some organizations will prefer the VMware vSwitch (or Distributed vSwitch); others will find value in the Cisco Nexus 1000V. Choice is a beautiful thing.
  • Jason Boche published some performance numbers for the EMC Celerra NS-120 that he’s recently added to his home “lab” (I use the term “lab” rather loosely here, considering the amount of equipment found there). Not surprisingly, Fibre Channel won out over software iSCSI and NFS, but Jason’s numbers showed a larger gap than many expected. I may have to repeat these tests myself in the EMC lab in RTP to see what sorts of results I see. If only I still had the NS-960 that I used to have at ePlus….sigh.
  • Joep Piscaer has a good post on Raw Device Mappings (RDMs) that is definitely worth a read. He’s pulled together a good summary of information on RDMs, such as requirements, limitations, use cases, and frequently asked questions. Good job, Joep!
  • Ivo Beerens has a pretty detailed post on multipathing best practices for VMware vSphere 4 with HP EVA storage. The recommendation is to use Round Robin with ALUA and to reduce the IOPS limit to 1. Ivo also presents a possible workaround to the IOPS “random value” bug that Chad Sakac discussed in this post some time ago.
  • Here’s yet another great diagram by Hany Michael, this time on ESX memory management and monitoring.
  • This post tells you how to modify your VMware Fusion configuration files to assign IP addresses for NAT-configured VMs. If you’re familiar with editing dhcpd.conf on a Linux system, the information found here on customizing Fusion should look quite familiar.
  • Back in 2007, I wrote a piece on using link state tracking in blade deployments. This post wasn’t necessarily virtualization focused, but certainly quite applicable to virtualization environments. Recently I saw this article pop up on using link state tracking with VMware ESX environments. It’s good to see more people recommending this functionality, which I feel is quite useful.
  • Congratulations to Mike Laverick of RTFM, who this past week announced that TechTarget is acquiring RTFM and its author, much like TechTarget acquired BrianMadden.com (and its author) last year. Is this a new trend for technical blog authors—build up a readership and then “sell it off” to a digital media company?

Here are some additional links that I stumbled across but haven’t yet fully assimilated or processed. You might see some more in-depth blog posts about these in the near future as they work their way through my consciousness.

Lab Experiment: Hypervisors (Virtualization Review)
The Backup Blog: Avamar and VMware Backup Revisited
VMware vSphere Capacity IQ Overview – I’m Impressed!

Well, that wraps it up for now. Thanks for reading and feel free to speak out in the comments below.


Two technologies that seem to have come to the fore recently are NPIV (N_Port ID Virtualization) and NPV (N_Port Virtualization). Judging just by the names, you might think that these two technologies are the same thing. While they are related in some aspects and can be used in a complementary way, they are quite different. What I’d like to do in this post is help explain these two technologies, how they are different, and how they can be used. I hope to follow up in future posts with some hands-on examples of configuring these technologies on various types of equipment.

First, though, I need to cover some basics. This is unnecessary for those of you that are Fibre Channel experts, but for the rest of the world it might be useful:

  • N_Port: An N_Port is an end node port on the Fibre Channel fabric. This could be an HBA (Host Bus Adapter) in a server or a target port on a storage array.
  • F_Port: An F_Port is a port on a Fibre Channel switch that is connected to an N_Port. So, the port into which a server’s HBA or a storage array’s target port is connected is an F_Port.
  • E_Port: An E_Port is a port on a Fibre Channel switch that is connected to another Fibre Channel switch. The connection between two E_Ports forms an Inter-Switch Link (ISL).

There are other types of ports as well—NL_Port, FL_Port, G_Port, TE_Port—but for the purposes of this discussion these three will get us started. With these definitions in mind, I’ll start by discussing N_Port ID Virtualization (NPIV).

N_Port ID Virtualization (NPIV)

Normally, an N_Port would have a single N_Port_ID associated with it; this N_Port_ID is a 24-bit address assigned by the Fibre Channel switch during the FLOGI process. The N_Port_ID is not the same as the World Wide Port Name (WWPN), although there is typically a one-to-one relationship between WWPN and N_Port_ID. Thus, for any given physical N_Port, there would be exactly one WWPN and one N_Port_ID associated with it.

What NPIV does is allow a single physical N_Port to have multiple WWPNs, and therefore multiple N_Port_IDs, associated with it. After the normal FLOGI process, an NPIV-enabled physical N_Port can subsequently issue additional fabric logins (FDISC requests) to register more WWPNs and receive more N_Port_IDs (one for each WWPN). The Fibre Channel switch must also support NPIV, as the F_Port on the other end of the link will “see” multiple WWPNs and multiple N_Port_IDs coming from the host and must know how to handle this behavior.

Once all the applicable WWPNs have been registered, each of these WWPNs can be used for SAN zoning or LUN presentation. There is no distinction between the physical WWPN and the virtual WWPNs; they all behave in exactly the same fashion and you can use them in exactly the same ways.
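
To make this slightly more concrete, here is a minimal sketch of what NPIV support might look like on an NX-OS-based Fibre Channel switch (a Cisco MDS or Nexus 5000, for example). This isn’t taken from any specific environment; the interface, VSAN, zone name, and WWPNs are made up, and exact syntax varies by platform and software release. The general flow is: enable the feature, leave the F_Port configuration alone, and then zone the virtual WWPNs exactly as you would physical ones.

feature npiv
!
! The F_Port facing the NPIV-capable host needs no special configuration
interface fc1/1
switchport mode F
no shutdown
!
! A virtual WWPN registered by the host is zoned just like a physical one
zone name vm-finance vsan 10
member pwwn 21:00:00:1b:32:aa:bb:cc
member pwwn 50:06:01:60:3c:e0:11:22

Once the host has performed its additional logins, something like show flogi database should list several WWPNs and N_Port_IDs against the same physical interface, which is an easy way to confirm that NPIV is actually in effect.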

So why might this functionality be useful? Consider a virtualized environment, where you would like to be able to present a LUN via Fibre Channel to a specific virtual machine only:

  • Without NPIV, it’s not possible: the N_Port on the physical host has only a single WWPN (and N_Port_ID), so any LUNs have to be zoned and presented to that single WWPN. Because every VM on the host shares the same physical N_Port, the same WWPN, and the same N_Port_ID, any LUN zoned to that WWPN is visible to all of the VMs on that host.
  • With NPIV, the physical N_Port can register additional WWPNs (and N_Port_IDs). Each VM can have its own WWPN. When you build SAN zones and present LUNs using the VM-specific WWPN, then the LUNs will only be visible to that VM and not to any other VMs.

Virtualization is not the only use case for NPIV, although it is certainly one of the easiest to understand.

As an aside, it’s interesting to me that VMotion works and is supported with NPIV as long as the RDMs and all associated VMDKs are in the same datastore. Looking at how the physical N_Port has the additional WWPNs and N_Port_IDs associated with it, you’d think that VMotion wouldn’t work. I wonder: does the HBA on the destination ESX/ESXi host have to “re-register” the WWPNs and N_Port_IDs on that physical N_Port as part of the VMotion process?

Now that I’ve discussed NPIV, I’d like to turn the discussion to N_Port Virtualization (NPV).

N_Port Virtualization

While NPIV is primarily a host-based solution, NPV is primarily a switch-based technology. It is designed to reduce switch management and overhead in larger SAN deployments. Consider that every Fibre Channel switch in a fabric needs a different domain ID, and that the total number of domain IDs in a fabric is limited. In some cases, this limit can be fairly low depending upon the devices attached to the fabric. The problem, though, is that you often need to add Fibre Channel switches in order to scale the size of your fabric. There is therefore an inherent conflict between trying to reduce the overall number of switches (to keep the domain ID count low) and needing to add switches (to get a sufficiently high port count). NPV is intended to help address this problem.

NPV introduces a new type of Fibre Channel port, the NP_Port. The NP_Port connects to an F_Port and acts as a proxy for other N_Ports on the NPV-enabled switch. Essentially, the NP_Port “looks” like an NPIV-enabled host to the F_Port on the other end. An NPV-enabled switch will register additional WWPNs (and receive additional N_Port_IDs) via NPIV on behalf of the N_Ports connected to it. The physical N_Ports don’t have any knowledge that this is occurring, nor do they need any special support for it; it’s all handled by the NPV-enabled switch.

Obviously, this means that the upstream Fibre Channel switch must support NPIV, since the NP_Port “looks” and “acts” like an NPIV-enabled host to the upstream F_Port. Additionally, because the NPV-enabled switch now looks like an end host, it no longer needs a domain ID to participate in the Fibre Channel fabric. Using NPV, you can add switches and ports to your fabric without adding domain IDs.
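
As a rough illustration (not taken from any particular deployment), here is what this pairing might look like on NX-OS gear: the edge switch runs in NPV mode with its uplink configured as an NP_Port, while the upstream core switch simply needs NPIV enabled on the corresponding F_Port. The interface numbers are hypothetical, and note that on some platforms enabling NPV mode is disruptive (it can erase the configuration and reload the switch), so check the documentation for your particular hardware first.

! On the NPV edge switch (consumes no domain ID)
feature npv
interface fc1/8
switchport mode NP
no shutdown
!
! On the NPIV-enabled core switch
feature npiv
interface fc2/4
switchport mode F
no shutdown

The hosts attached to the edge switch still see ordinary F_Ports; all of the NPIV-style registrations happen between the NP_Port and the core switch on their behalf.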

So why is this functionality useful? There is the immediate benefit of being able to scale your Fibre Channel fabric without having to add domain IDs, yes, but in what sorts of environments might this be particularly useful? Consider a blade server environment, like an HP c7000 chassis, where there are Fibre Channel switches in the back of the chassis. By using NPV on these switches, you can add them to your fabric without having to assign a domain ID to each and every one of them.

Here’s another example. Consider an environment where you are mixing different types of Fibre Channel switches and are concerned about interoperability. As long as there is NPIV support, you can enable NPV on one set of switches. The NPV-enabled switches will then act like NPIV-enabled hosts, and you won’t have to worry about connecting E_Ports and creating ISLs between different brands of Fibre Channel switches.

I hope you’ve found this explanation of NPIV and NPV helpful and accurate. In the future, I hope to follow up with some additional posts—including diagrams—that show how these can be used in action. Until then, feel free to post any questions, thoughts, or corrections in the comments below. Your feedback is always welcome!

Disclosure: Some industry contacts at Cisco Systems provided me with information regarding NPV and its operation and behavior, but this post is neither sponsored nor endorsed by anyone.


Storage Short Take #5

I’ve decided to resurrect my Storage Short Take series, after almost a year since the last one was published. I find myself spending more and more time in the storage realm—which is completely fine with me—and so more and more information coming to me in various forms is related to storage. While I’m far from the likes of storage rockstars such as Robin Harris, Stephen Foskett, Storagebod, and others, hopefully you’ll find something interesting and useful here. Enjoy!

  • This blog post by Frank Denneman on the HP LeftHand product is outstanding. I learned more from this post than a lot of posts recently. Great work Frank!
  • Need a bit more information on FCoE? Nigel Poulton has a great post here (it’s a tad bit older, but I’ve just stumbled across it) with good details for those who might not be familiar with FCoE. It’s worth a read if you haven’t already taken the time to come up to speed on FCoE and its “related” technologies.
  • What led me to Nigel’s FCoE post was this post by Storagezilla in which he rants about “vendor flapheads” who “are intentionally obscuring it’s [FCoE's] limitations”. You’ve got that right! Wanting to present a reasonably impartial and complete view of FCoE was partially the impetus behind my end-to-end FCoE post and the subsequent clarification. Thankfully, I think that the misinformation around FCoE is starting to die down.
  • This post has a bit of useful information on HP EVA path policies and vSphere multipathing. I would have liked a bit more detail than what was provided, but the content is good nevertheless.
  • Devang Panchigar’s recap of HP TechDay day 1, which focused on HP StorageWorks technologies, has some good information, especially if you aren’t already familiar with some of HP’s various storage platforms.
  • Chad Sakac of EMC has some very useful information on Asymmetric Logical Unit Access (ALUA), VMware vSphere, and EMC CLARiiON arrays. If you’re using EMC storage with your VMware vSphere 4 environment, you have a CX4, and you’re running FLARE 28.5 or later, it might be worthwhile to switch your path selection policy to Round Robin (RR).
  • Speaking of RR with vSphere, somewhere I remember seeing information on changing the default number of I/Os down a path, and tweaking that for best performance. Was that in Chad’s VMworld session? Anyone remember?
  • If you’re looking for a high-level overview of SAN and NAS virtualization, this InfoWorld article can help you get started. You’ll soon want to delve deeper than this article can provide, but it’s a reasonable starting point, at least.

That’s it for this time around. Feel free to share other interesting or useful links in the comments.


Along with a number of other projects recently, I’ve also been spending time working with HP Virtual Connect Flex-10. You may have seen these (relatively) recent Flex-10 articles:

Using VMware ESX Virtual Switch Tagging with HP Virtual Connect
Using Multiple VLANs with HP Virtual Connect Flex-10
Follow-Up About Multiple VLANs, Virtual Connect, and Flex-10

As I began to work up some documentation for internal use at my employer, I asked myself this question: what are the design considerations for how an architect should configure Flex-10?

Think about it for a moment. In a “traditional” VMware environment, architects will place port groups onto vSwitches (or dvPort groups onto dvSwitches) based on criteria like physical network segregation, number of uplinks, VLAN support, etc. In a Flex-10 environment, those design criteria begin to change:

  • The number of uplinks doesn’t matter anymore, because bandwidth is controlled in the Flex-10 configuration. You want 1.5Gbps for VMotion? Fine, no problem. You want 500Mbps for the Service Console? Fine, no problem. You want 8Gbps for IP-based storage traffic? Fine, no problem. As long as it all adds up to 10Gbps, architects can subdivide the bandwidth however they desire. So the number of uplinks, from a bandwidth perspective, is no longer applicable.
  • Physical network segregation is a non-issue, because all the FlexNICs share the same LOM and will (as far as I know) all share the same uplinks. (In other words, I don’t think that LOM1:a can use one uplink while LOM1:b uses a different uplink.) You’ll need physically distinct NICs in order to handle physically segregated networks. Of course, physically segregated networks present a bit of a challenge for blade environments anyway, but that’s beside the point.
  • VLAN support is a bit different, too, because of the fact that you can’t map overlapping VLANs to FlexNICs on the same LOM. In addition, because of the way VLANs work within a Virtual Connect environment, I don’t see VLANs being an applicable design consideration anyway; there’s too much flexibility in how VLANs are presented to servers for that to drive how networking should be set up.

So what are the design considerations for Flex-10 in VMware environments, then? What would drive an architect to specify multiple FlexNICs per LOM instead of just lumping everything together in a single 10Gbps pipe? Is bandwidth the only real consideration? I’d love to hear what others think. Let me hear your thoughts in the comments—thanks!


One of the things that confused me when I first started working with the Nexus 5000 line was how I would connect this 10Gb Ethernet switch to older 1Gb Ethernet switches, like the older Cisco Catalyst or HP ProCurve switches that I also have in the lab. It turns out—many of you probably already know this—that the first 8 ports on a Nexus 5010 and the first 16 ports on Nexus 5020 can be configured to operate as Gigabit Ethernet ports. You can use these ports to connect to older Gigabit Ethernet switches.

It’s really not too complicated. In this post, I’ll describe the configuration I used to connect a Cisco Catalyst 3560G and an HP ProCurve 2924 to a Cisco Nexus 5010. In both cases, I used a 2-port port channel to link the switches together. The one drawback is that the Nexus doesn’t participate in VTP, so all VLANs have to be manually defined on each switch independently. For my small lab environment, that’s not a showstopper, but it does underscore the fact that the Nexus 5000 series is primarily targeted as an access switch.

Here’s the configuration I used on the Nexus:

interface Ethernet1/3
switchport mode trunk
speed 1000
switchport trunk native vlan 999
channel-group 3 mode on

This configuration was repeated on 2 ports for the Cisco Catalyst 3560G and on 2 more ports for the HP ProCurve 2924. Obviously, each of them used a different port channel (channel-group 3 mode on for the Catalyst and channel-group 4 mode on for the ProCurve). Remember that you have to use one of the first 8 (for a Nexus 5010) or the first 16 (for a Nexus 5020) ports because these are the only ports that support setting the speed down to Gigabit Ethernet.

On the Cisco Catalyst 3560G, the configuration is almost identical:

interface GigabitEthernet1/10
switchport mode trunk
switchport trunk native vlan 999
channel-group 2 mode on

This configuration is repeated on two ports (same as the Nexus). Note that the channel-groups don’t have to match between the switches, only within each switch. There’s no need to specify the speed here on the Catalyst, as this is already a Gigabit Ethernet port. We only need to specify the speed on the Nexus because it won’t negotiate down to Gigabit Ethernet.

On the HP ProCurve, the configuration is pretty understandable. First, the trunk command creates the port channel:

trunk 23-24 Trk1 trunk

Then, the VLAN configuration specifies the same native (untagged) VLAN on the port channel:

vlan 999
name "Trunk-Native"
untagged 12,14,20,A2,Trk1
no ip address
exit

Once the configuration is done, you’ll need to insert RJ-45 SFPs (Cisco product number GLC-T, I believe) into the appropriate ports on the Nexus 5000 switch and then cable the switches together. If you didn’t make any typos along the way, then you should be good to go!
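
Before trusting the new links with real traffic, it is worth verifying that the port channels actually came up on each side. The following show commands (a generic sketch, not specific to this setup) should confirm that the member ports bundled correctly; if something looks off, double-check the speed, native VLAN, and channel-group settings.

! On the Nexus 5010
show port-channel summary
show interface brief
!
! On the Cisco Catalyst 3560G
show etherchannel summary
!
! On the HP ProCurve 2924
show trunks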


I’ll preface this article by saying that I am not (yet) an expert with Cisco’s Unified Computing System (UCS), so if I have incorrect information I’m certainly open to clarification. Some would also accuse me of being a UCS-hater, since I had the audacity to call UCS a blade server (the horror!). Truth is, I’m on the side of the customer, and as we all know there is no such thing as a “one size fits all” solution. Cisco can’t provide one, and HP can’t provide one.

The mudslinging that I’m talking about is taking place between Steve Chambers (formerly with VMware, now with Cisco) and HP. HP published a page with a list of reasons why Cisco UCS should be dismissed, and Steve responded on his personal blog. Here are the links to the pages in question:

The Real Story about Cisco’s “One Giant Switch” view of the Datacenter (this was based, in part at least, on the next link)
Buyer beware of the “one giant switch” data center network model
HP on the run

I thought I might take a few points from these differing perspectives and try to call out some mudslinging that’s occurring on both sides. To be fair, Steve states in the comments to his article that it was intended to be entertaining and light-hearted, so please keep that in mind.

Point #1: Complexity

The reality of these solutions is that they are both equally complex, just in different ways. HP’s BladeSystem Matrix uses reasonably well-understood and mature technologies, while Cisco UCS uses newer technologies that aren’t as widely understood. This is not a knock against either; as I’ve said before in many other contexts and many other situations, there are advantages and disadvantages to every approach. HP’s advantage is that it leverages the knowledge and experience that people have with their existing technologies: StorageWorks storage solutions, ProLiant blades, ProCurve networking, and HP software. The disadvantage is that HP is still tied to the same “legacy” technologies.

In building UCS, Cisco’s advantage is that the solution uses the latest technologies (including some that are still Cisco-proprietary) and doesn’t have any ties to “legacy” technologies. The disadvantage, naturally, is that this technological leap creates additional perceived complexity because people have to learn the new technologies embedded within UCS.

Adding to the simple fact that both of these solutions are equally complex in different ways is the fact that you must re-architect your storage in order to gain the full advantage of either solution. To get the full benefit of both UCS and HP BladeSystem Matrix, you need to be doing boot-from-SAN. (Clearly, this doesn’t apply to virtualized systems, but both Cisco and HP are touting their solutions as equally applicable to non-virtualized workloads.) This is a fact that, in my opinion, has been significantly understated.

Neither HP nor Cisco really has the right to proclaim that its solution is less complex than the other’s. Both solutions are complex in their own ways.

Point #2: Standards-Based vs. Proprietary

Again, neither HP nor Cisco really has any room to throw the rock labeled “Proprietary”. Both solutions have their own measure of vendor lock-in. HP is right; you can’t put an HP blade or an IBM blade into a Cisco UCS chassis. Steve Chambers is right; you can’t put a Dell blade or a Cisco blade server into an HP chassis. The reality, folks, is that every vendor’s solution has a certain amount of vendor lock-in. Does VMware vSphere have vendor lock-in? Sure, but so does Hyper-V and Citrix XenServer. Does Microsoft Windows have vendor lock-in? Of course, but so does…so does…well, you get the idea.

HP says VNTag is proprietary and won’t even work with some of Cisco’s own switches. OK, let’s talk proprietary…does Flex-10 work with other vendors’ switches? The fact of the matter is that both Cisco and HP have their own forms of vendor lock-in, and neither can cry “foul” on the other. It’s a draw.

Point #3: The “Giant Network Switch”

At one point in HP’s article (I believe it was under the Complexity heading) they make this point about the network traffic in a Cisco UCS environment:

In Cisco’s one-giant-switch model, all traffic must travel over a physical wire to a physical switch for every operation. Consequently, it appears that traffic even between two virtual servers running next to each other on the same physical would have to traverse the network, making an elaborate “hairpin turn” within the physical switch, only to traverse the network again before reaching the other virtual server on the same physical machine. Return traffic (or a “response” from the second virtual machine) would have to do the same. Each of these packet traversals logically accounts for multiple interrupts, data copies and delays for your multi-core processor.

I do have to call “partial FUD” on this one. In a virtualized environment, even a virtualized environment running the Cisco Nexus 1000V, traffic from one virtual server to another virtual server on the same host never leaves that host. HP’s statement seems to imply otherwise, but as far as I know that local switching behavior holds. However, HP’s statement is partially true: traffic from a virtual server on one physical host does have to travel to the fabric interconnect and then back again in order to communicate with a virtual server running on another physical host in the same chassis. The fabric extenders don’t provide any switching functionality; that all occurs in the interconnect. Based on the information I’ve seen thus far, I would say that using Cisco’s SR-IOV-based “Palo” adapter and attaching VMs directly to a virtual PCIe NIC would put you into the situation HP is describing, which then just reinforces a question that Brad Hedlund and I tossed back and forth a couple of times: is hypervisor-bypass, aka VMDirectPath, with “Palo” the right design for all environments? In my opinion, no—I again go back to my statement that there is no “one size fits all” solution. And considering that the use of hypervisor-bypass with “Palo” would put you into a situation where traffic between two virtual machines on the same physical host has to travel to the fabric interconnect and back again, I’m even less inclined to use that architecture.

In the end, it’s pretty clear to me that both HP and Cisco have advantages and disadvantages to their respective solutions, and neither vendor really has room to label the other “more complex” or “more proprietary.” But what do you think? Do you agree or disagree? Courteous comments (with full vendor disclosure) are welcome.

