
Reader Brian Markussen—with whom I had the pleasure to speak at the Danish VMUG in Copenhagen earlier this month—brought to my attention an issue between VMware vSphere’s health check feature and Cisco UCS when using Cisco’s VIC cards. His findings, confirmed by VMware support and documented in this KB article, show that the health check feature doesn’t work properly with Cisco UCS and the VIC cards.

Here’s a quote from the KB article:

The distributed switch network health check, including the VLAN, MTU, and teaming policy check can not function properly when there are hardware virtual NICs on the server platform. Examples of this include but are not limited to Broadcom Flex10 systems and Cisco UCS systems.

(Ignore the fact that “UCS systems” is redundant.)

According to Brian, a fix for this issue will be available in a future update to vSphere. In the meantime, there doesn’t appear to be any workaround, so plan accordingly.


Welcome to Technology Short Take #38, another installment in my irregularly-published series that collects links and thoughts on data center-related technologies from around the web. But enough with the introduction, let’s get on to the content already!

Networking
  • Jason Edelman does some experimenting with the Python APIs on a Cisco Nexus 3000. In the process, he muses about the value of configuration management tool chains such as Chef and Puppet in a world of “open switch” platforms such as Cumulus Linux.
  • Speaking of Cumulus Linux…did you see the announcement that Dell has signed a reseller agreement with Cumulus Networks? I’m pretty excited about this announcement, and I hope that Cumulus sees great success as a result. There are a variety of write-ups about the announcement; some good, many not so good. The not-so-good variety typically refers to Cumulus’ product as an SDN product when technically it isn’t. This article on Barron’s by Tiernan Ray is a pretty good summary of the announcement and some of its implications.
  • Pete Welcher has launched a series of articles discussing “practical SDN,” focusing on the key leaders in the market: NSX, DFA, and the yet-to-be-launched ACI. In the initial installment of the series, he does a good job of providing some basics around each of the products, although (as would be expected of a product that hasn’t launched yet) he has to do some guessing when it comes to ACI. The series continues with a discussion of L2 forwarding and L3 forwarding across the various products. Definitely worth reading, in my opinion.
  • Nick Buraglio takes away all your reasons for not collecting flow-based data from your environment with his write-up on installing nfsen and nfdump for NetFlow and/or sFlow collection.
  • Terry Slattery has a nice write-up on new network designs that are ideally suited for SDN. If you are looking for a primer on “next-generation” network designs, this is worth reviewing.
  • Need some Debian packages for Open vSwitch 2.0? Here’s another article from Nick Buraglio—he has some information to help you out.
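Nick’s write-up covers the installation details; just to give a taste of what collection and querying typically look like, here’s a minimal sketch (the listening port and spool directory are my own illustrative choices, not taken from his article). The wrapper prints each command instead of executing it, so the sketch is safe to run anywhere:

```shell
# Dry-run wrapper: print the command rather than executing it,
# since nfcapd/nfdump may not be installed here.
run() { echo "would run: $*"; }

# Start a collector that listens for NetFlow on UDP 9995 and
# writes rotated capture files under /var/netflow.
run nfcapd -D -w -l /var/netflow -p 9995

# Later, summarize what was collected: top 10 source IPs by bytes.
run nfdump -R /var/netflow -s srcip/bytes -n 10
```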

Servers/Hardware
Nothing this time, but check back next time.

Security
Nothing from my end. Maybe you have something you’d like to share in the comments?

Cloud Computing/Cloud Management

  • Christian Elsen (who works in Integration Engineering at VMware) has a nice series of articles going on using OpenStack with vSphere and NSX. The series starts here, but follow the links at the bottom of that article for the rest of the posts. This is really good stuff—he includes the use of the NSX vSwitch with vSphere 5.5, and talks about vSphere OpenStack Virtual Appliance (VOVA) as well. All in all, well worth a read in my opinion.
  • Maish Saidel-Keesing (one of my co-authors on the first edition of VMware vSphere Design and also a super-sharp guy) recently wrote an article on how adoption of OpenStack will slow the adoption of SDN. While I agree that widespread adoption of OpenStack could potentially retard the evolution of enterprise IT, I’m not necessarily convinced that it will slow the adoption of SDN and network virtualization solutions. Why? Because, in part, I believe that the full benefits of something like OpenStack need a good network virtualization solution in order to be realized. Yes, some vendors are writing plugins for Neutron that manipulate physical switches. But for developers to get true isolation, application portability, the ability to re-create production environments in development—all that is going to require network virtualization.
  • Here’s a useful OpenStack CLI cheat sheet for some commonly-used commands.
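To give a flavor of that cheat sheet, here’s a minimal sketch of a few commonly-used commands from the era’s OpenStack clients (the instance, flavor, and image names are made up for illustration). The `show` wrapper just prints each command, since they’d otherwise need a live OpenStack endpoint:

```shell
# Print rather than execute, so the sketch runs anywhere.
show() { echo "$*"; }

show keystone tenant-list                              # list tenants/projects
show glance image-list                                 # list available images
show nova list                                         # list running instances
show nova boot --flavor m1.small --image cirros web01  # launch an instance
```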

Operating Systems/Applications

  • If you’re using Ansible (a product I haven’t had a chance to use, but one I’m watching closely), you’ll want to see this article I came across on an upcoming change to the SSH transport that Ansible uses. This change, referred to as “ssh_alt,” promises a significant performance increase for Ansible. Good stuff.
  • I don’t think I’ve mentioned this before, but Forbes Guthrie (my co-author on the VMware vSphere Design books and an all-around great guy) has a series going on using Linux as a domain controller for a vSphere-based lab. The series is up to four parts now: part 1, part 2, part 3, and part 4.
  • Need (or want) to increase the SCSI timeout for a KVM guest? See these instructions.
  • I’ve been recommending that IT pros get more familiar with Linux, as I think its influence in the data center will continue to grow. The problem that I sometimes face, though, is that experienced folks tend to share these “super commands” that ordinary folks have a hard time decomposing. Fortunately, this site should make that easier. I’ve tried it—it’s actually pretty handy.
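As a concrete example of the sort of “super command” that benefits from being decomposed, consider this common idiom for finding the most frequent lines in a file (the sample data here is my own):

```shell
# Sample input: a few repeated words.
printf 'vlan\nmtu\nvlan\nacl\nvlan\nmtu\n' > /tmp/words.txt

# The one-liner an experienced admin might paste, decomposed:
#   sort      groups identical lines together (uniq requires sorted input)
#   uniq -c   collapses duplicates, prefixing each line with its count
#   sort -rn  orders by count, largest first
#   head -3   keeps the top three
sort /tmp/words.txt | uniq -c | sort -rn | head -3
```

Each stage is simple on its own; it’s the chaining that makes the whole thing look like magic.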

Storage
  • Jim Ruddy (an EMCer, former co-worker of mine, and an overall great guy) has a pretty cool series of articles discussing the use of EMC ViPR in conjunction with OpenStack. Want to use OpenStack Glance with EMC ViPR using ViPR’s Swift API support? See here. Want a multi-node Cinder setup with ViPR? Read how here. Multi-node Glance with ViPR? He’s got it. If you’re new to ViPR (who outside of EMC isn’t?), you might also find his articles on deploying EMC ViPR, setting up back-end storage for ViPR, or deploying object services with ViPR to be helpful.
  • Speaking of ViPR, EMC has apparently decided to release it for free for non-commercial use. See here.
  • Looking for more information on VSAN? Look no further than Cormac Hogan’s extensive VSAN series (up to Part 14 at last check!). The best way to find this stuff is to check articles tagged VSAN on Cormac’s site. The official VMware vSphere blog also has a series of articles running; check out part 1 and part 2.

Virtualization
  • Did you happen to see this news about Microsoft Hyper-V Recovery Manager (HRM)? This is an Azure-hosted service that can be roughly compared to VMware’s Site Recovery Manager (SRM). However, unlike SRM (which is hosted on-premises), HRM is hosted by Microsoft Azure. As the article points out, it’s important to understand that this doesn’t mean your VMs are replicated to Azure—it’s just the orchestration portion of HRM that is running in Azure.
  • Oh, and speaking of Hyper-V…in early January Microsoft released version 3.5 of their Linux Integration Services, which primarily appears to be focused on adding Linux distribution support (CentOS/RHEL 6.5 is now supported).
  • Gregory Gee has a write-up on installing the Cisco CSR 1000V in VirtualBox. (I’m a recent VirtualBox convert myself; I find the vboxmanage command just so very handy.) Note that I haven’t tried this myself, as I don’t have a Cisco login to get the CSR 1000V code. If any readers have tried it, I’d love to hear your feedback. Gregory also has a few other interesting posts I’m planning to review in the next few weeks as well.
  • Sunny Dua, who works with VMware PSO in India, has a series of blog posts on architecting vSphere environments. It’s currently up to five parts; I don’t know how many more (if any) are planned. Here are the links: part 1 (clusters), part 2 (vCenter SSO), part 3 (storage), part 4 (design process), and part 5 (networking).
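Speaking of how handy `vboxmanage` is: standing up a VM entirely from the CLI looks roughly like the sketch below. Like Gregory’s CSR 1000V procedure, I haven’t tested this end to end, and the VM name, memory size, and disk path are purely illustrative; the wrapper prints each command rather than executing it:

```shell
# Dry-run wrapper: VirtualBox may not be installed where this runs.
vbox() { echo "VBoxManage $*"; }

vbox createvm --name csr1000v --ostype Linux_64 --register
vbox modifyvm csr1000v --memory 4096 --cpus 1 --nic1 nat
vbox storagectl csr1000v --name IDE --add ide
vbox storageattach csr1000v --storagectl IDE --port 0 --device 0 \
     --type hdd --medium /tmp/csr1000v.vdi
```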

It’s time to wrap up now before this gets any longer. If you have any thoughts or tidbits you’d like to share, I welcome any and all courteous comments. Join (or start) the conversation!


Welcome to Technology Short Take #29! This is another installment in my irregularly-published series of links, thoughts, rants, and raves across various data center-related fields of technology. As always, I hope you find something useful here.

Networking
  • Who out there has played around with Mininet yet? Looks like this is another tool I need to add to my toolbox as I continue to explore networking technologies like OpenFlow, Open vSwitch, and others.
  • William Lam has a recent post on some useful VXLAN commands found in ESXCLI with vSphere 5.1. I’m a CLI fan, so I like this sort of stuff.
  • I still have a lot to learn about OpenFlow and networking, but this article from June of last year (it appears to have been written by Ivan Pepelnjak) discusses some of the potential scalability concerns around early versions of the OpenFlow protocol. In particular, the use of OpenFlow to perform granular per-flow control when there are thousands (or maybe only hundreds) of flows presents a scalability challenge (for now, at least). In my mind, this isn’t an indictment of OpenFlow, but rather an indictment of the way that OpenFlow is being used. I think that’s the point Ivan tried to make as well—it’s the architecture and how OpenFlow is used that makes a difference. (Is that a reasonable summary, Ivan?)
  • Brad Hedlund (who will be my co-worker starting on 2/11) created a great explanation of network virtualization that clearly breaks down the components and explains their purpose and function. Great job, Brad.
  • One of the things I like about Open vSwitch (OVS) is that it is so incredibly versatile. Case in point: here’s a post on using OVS to connect LXC containers running on different hosts via GRE tunnels. Handy!
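The OVS-plus-GRE trick in that last bullet boils down to a couple of `ovs-vsctl` commands on each host (the bridge name and remote IP below are placeholders of my own); a dry-run wrapper keeps the sketch runnable even without OVS installed:

```shell
run() { echo "would run: $*"; }

# On each host: create a bridge for the containers to attach to...
run ovs-vsctl add-br br0
# ...and add a GRE port pointing at the other host's IP, so traffic
# between the two bridges is tunneled over the physical network.
run ovs-vsctl add-port br0 gre0 -- set interface gre0 \
    type=gre options:remote_ip=203.0.113.2
```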

Servers/Hardware
  • Cisco UCS is pretty cool in that it makes automation of compute hardware easier through such abstractions as server profiles. Now, you can also automate UCS with Chef. I traded a few tweets with some Puppet folks, and they indicated they’re looking at this as well.
  • Speaking of Puppet and hardware, I also saw a mention on Twitter about a Puppet module that will manage the configuration of a NetApp filer. Does anyone have a URL with more information on that?
  • Continuing the thread on configuration management systems running on non-compute hardware (I suppose this shouldn’t be under the “Servers/Hardware” section any longer!), I also found references to running CFEngine on network appliances and running Chef on Arista switches. That’s kind of cool. What kind of coolness would result from even greater integration between an SDN controller and a declarative configuration management tool? Hmmm…

Security
  • Want full-disk encryption in Ubuntu, using AES-XTS-PLAIN64? Here’s a detailed write-up on how to do it.
  • In posts and talks I’ve given about personal productivity, I’ve spoken about the need to minimize “friction,” that unspoken drag that makes certain tasks or workflows more difficult and harder to adopt. Tal Klein has a great post on how friction comes into play with security as well.
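For context on the first bullet, the heart of an AES-XTS-PLAIN64 setup is a `cryptsetup` invocation along these lines. The device name is a placeholder, and the linked write-up covers many more steps (boot partition, crypttab, and so on); the wrapper prints rather than executes, since this would otherwise destroy data:

```shell
run() { echo "would run: $*"; }

# Format the partition as a LUKS container using AES in XTS mode.
run cryptsetup luksFormat --cipher aes-xts-plain64 --key-size 512 /dev/sdX2
# Open it under a friendly name, then put a filesystem on it.
run cryptsetup luksOpen /dev/sdX2 cryptroot
run mkfs.ext4 /dev/mapper/cryptroot
```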

Cloud Computing/Cloud Management

  • If you, like me, are constantly on the search for more quality information on OpenStack and its components, then you’ll probably find this post on getting Cinder up and running to be helpful. (I did, at least.)
  • Mirantis—recently the recipient of $10 million in funding from various sources—posted a write-up in late November 2012 on troubleshooting some DNS and DHCP service configuration issues in OpenStack Nova. The post is a bit specific to work Mirantis did in integrating an InfoBlox appliance into OpenStack, but might be useful in other situations as well.
  • I found this article on Packstack, a tool used to transform Fedora 17/18, CentOS 6, or RHEL 6 servers into a working OpenStack deployment (Folsom). It seems to me that lots of people understand that getting an OpenStack cloud up and running is a bit more difficult than it should be, and are therefore focusing efforts on making it easier.
  • DevStack is another proof point of the effort going into making it easier to get OpenStack up and running, although the focus for DevStack is on single-host development environments (typically virtual themselves). Here’s one write-up on DevStack; here’s another one by Cody Bunch, and yet another one by the inimitable Brent Salisbury.
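To illustrate just how little DevStack asks of you, a single-host bring-up is roughly the following (the passwords in localrc are throwaway examples of my own, and the clone URL reflects the project’s GitHub location at the time). Only the localrc file is actually written here; the other commands go through a dry-run wrapper:

```shell
run() { echo "would run: $*"; }

run git clone https://github.com/openstack-dev/devstack.git

# DevStack (in the Folsom era) reads its settings from a "localrc" file.
cat > /tmp/localrc <<'EOF'
ADMIN_PASSWORD=devstack
MYSQL_PASSWORD=devstack
RABBIT_PASSWORD=devstack
SERVICE_TOKEN=devstack
EOF

run ./stack.sh   # installs and configures the OpenStack services
```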

Operating Systems/Applications

  • If you’re interested in learning Puppet, there are a great many resources out there; in fact, I’ve already mentioned many of them in previous posts. I recently came across these Example42 Puppet Tutorials. I haven’t had the chance to review them myself yet, but it looks like they might be a useful resource as well.
  • Speaking of Puppet, the puppet-lint tool is very handy for ensuring that your Puppet manifest syntax is correct and follows the style guidelines. The tool has recently been updated to help fix issues as well. Read here for more information.
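Typical puppet-lint usage is about as simple as tooling gets (the manifest path is illustrative, and the fix option reflects the “help fix issues” capability the linked post describes). Dry-run wrapper again, since puppet-lint may not be installed where this runs:

```shell
run() { echo "would run: $*"; }

run gem install puppet-lint              # install the gem
run puppet-lint manifests/init.pp        # report style violations
run puppet-lint --fix manifests/init.pp  # attempt automatic fixes
```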

Storage
  • Greg Schulz (aka StorageIO) has a couple of VMware storage tips posts you might find useful reading. Part 1 is here, part 2 is here. Enjoy!
  • Amar Kapadia suggests that adding LTFS to Swift might create an offering that could give AWS Glacier a real run for the money.
  • Gluster interests me. Perhaps it shouldn’t, but it does. For example, the idea of hosting VMs on Gluster (similar to the setup described here) seems quite interesting, and the work being done to integrate KVM/QEMU with Gluster also looks promising. If I can ever get my home lab into the right shape, I’m going to do some testing with this. Anyone done anything with Gluster?
  • Erik Smith has a very informative write-up on why FIP snooping is important when using FCoE.
  • Via this post on ten useful OpenStack Swift features, I found this page on how to build the “Swift All in One,” a useful VM for learning all about Swift.

Virtualization
  • There’s no GUI for it, but it’s kind of cool that you can indeed create VM anti-affinity rules in Hyper-V using PowerShell. This is another example of how Hyper-V continues to get more competent. Ignore Microsoft and Hyper-V at your own risk…
  • Frank Denneman takes a quick look at using user-defined NetIOC network resource pools to isolate and protect IP-based storage traffic from within the guest (i.e., using NFS or iSCSI from within the guest OS, not through the VMkernel). Naturally, this technique could be used to “protect” or “enhance” other types of important traffic flows to/from your guest OS instances as well.
  • Andre Leibovici has a brief write-up on the PowerShell module for the Nicira Network Virtualization Platform (NVP). Interesting stuff…
  • This write-up by Falko Timme on using BoxGrinder to create virtual appliances for KVM was interesting. I might have to take a look at BoxGrinder and see what it’s all about.
  • In case you hadn’t heard, OVF 2.0 has been announced/released by the DMTF. Winston Bumpus of VMware’s Office of the CTO has more information in this post. I also found the OVF 2.0 frequently asked questions (FAQs) to be helpful. Of course, the real question is how long it will be before vendors add support for OVF 2.0, and how extensive that support actually is.

And that’s it for this time around! Feel free to share your thoughts, suggestions, clarifications, or corrections in the comments below. I encourage your feedback, and thanks for reading.


About a year ago, I posted a look at my planned projects for 2012. Now, a year later, it’s time to review my progress (or lack thereof) and measure how well I did (or didn’t do) on those projects.

First, let’s review the original project list:

  1. Learn to script in Perl.
  2. Learn to speak German.
  3. Become more familiar with Xen (and Open vSwitch and OpenStack).
  4. Pursue CCNP.

In my late June mid-year project update, I dropped the Perl scripting project simply because I had no practical applications driving the use of Perl. So, with that in mind, how did I do?

  1. Learn to speak German: Although I won’t say that I’ve actually learned to speak German, I have made some progress here. It’s not nearly the progress that I wanted to make, though—I wanted to be much farther along than I am. Grade: D

  2. Become more familiar with Xen, OVS, OpenStack: In retrospect, this project was overly broad, and therefore has mixed results. I ended up ditching Xen in favor of KVM, and made decent progress on that front. My work with Open vSwitch (OVS) was pretty good, probably the best out of the group. I still have quite a way to go with OpenStack, but I feel that time spent with KVM, OVS, and Libvirt helped build solid fundamentals for future progress. Grade: B

  3. Pursue CCNP: As I mentioned in the mid-year update, my goal was never to actually achieve CCNP this year, but simply to make progress. Regardless, my progress was abysmal. Grade: F

  4. Learn to work with Puppet: Not on my original project list, this is something that I nevertheless spent a fair amount of time pursuing. I’m not an expert (not anywhere close), but I feel like I did make reasonable progress. Grade: C

In summary: not very good!

So, what can I learn from these results?

  • First, the synergy of the projects really does make a difference. As readers noted in the comments on my original 2012 projects list, my choice of projects wasn’t synergistic, and this hampered efforts. Key takeaway: listen more closely to the advice of others!
  • Projects need to be more tightly defined. The “learn Xen, OVS, OpenStack” project was simply too broad, and encompassed too many different components. As a result, progress was mixed.
  • There are still some fundamental building blocks that I personally need to shore up. For example, my work with KVM, OVS, Libvirt, and Puppet exposed some gaps in my base Linux knowledge, and this is reflected in my progress.

In a (near-)future post, I’ll incorporate the progress on my 2012 projects and the key takeaways into my list of 2013 projects. Until then, I welcome any feedback or thoughts in the comments below.


I like to spend time examining the areas where different groups of technologies intersect. Personally, I find this activity fascinating, and perhaps that’s the reason that I find myself pursuing knowledge and experience in virtualization, networking, storage, and other areas simultaneously—it’s an effort to spend more time “on the border” between various technologies.

One border, in particular, is very interesting to me: the border between virtualization and networking. Time spent thinking about the border between networking and virtualization is what has generated posts like this one, this one, or this one. Because I’m not a networking expert (yet), most of the stuff I generate is junk, but at least it keeps me entertained—and it occasionally prods the Really Smart Guys (RSGs) to post something far more intelligent than anything I can create.

Anyway, I’ve been thinking more about some of these networking-virtualization chimeras, and I thought it might be interesting to talk about them, if for no other reason than to encourage the RSGs to correct me and help everyone understand a little better.

<aside>A chimera, by the way, was a mythological fire-breathing creature that was part lion, part goat, and part serpent; more generically, the word refers to any sort of organism that has two groups of genetically distinct cells. In layman’s terms, it’s something that is a mix of two other things.</aside>

Here are some of the networking-virtualization chimeras I’ve concocted:

  • FabricPath/TRILL on the hypervisor: See this blog post for more details. It turns out, at least at first glance, that this particular combination doesn’t seem to buy us much. The push for large L2 domains that seemed to fuel FabricPath and TRILL now seems to be abating in favor of network overlays and L3 routing.

  • MPLS-in-IP on the hypervisor: I also wrote about this strange concoction here. At first, I thought I was being clever and sidestepping some issues by bringing MPLS support into the hypervisor, but in thinking more about this I realize I’m wrong. Sure, we could encapsulate VM-to-VM traffic into MPLS, then encapsulate MPLS in UDP, but how is that any better than just encapsulating VM-to-VM traffic in VXLAN? It isn’t. (Not to mention that Ivan Pepelnjak set the record straight.)

  • LISP on the hypervisor: I thought this was a really good idea; by enabling LISP on the hypervisor and essentially making the hypervisor an ITR/ETR (see here for more LISP info), inter-DC vMotion becomes a snap. Want to use a completely routed access layer? No problem. Of course, that assumes all your WAN and data center equipment are LISP-capable and enabled/configured for LISP. I’m not the only one who thought this idea was cool, either. I’m sure there are additional problems/considerations of which I’m not aware, though—networking gurus, want to chime in and educate me on what I’m missing?

  • OTV on the hypervisor: This one isn’t really very interesting, as it bears great similarity to VXLAN (both OTV and VXLAN, to my knowledge, use very similar frame formats and encapsulation schemes). Is there something else here I’m missing?

  • VXLAN on physical switches: This one is interesting, even necessary according to some experts. Enabling VXLAN VTEP (VXLAN Tunnel End Point) termination on physical switches might also address some of the odd traffic patterns that would result from the use of VXLAN (see here for a simple example). Arista Networks demonstrated this functionality at VMworld 2012 in San Francisco, so this particular networking-virtualization mashup is probably closer to reality than any of the others.

  • OpenFlow on the hypervisor: Open vSwitch (OVS) already supports OpenFlow, so you might say that this mashup already exists. It’s not unreasonable to think Nicira might port OVS to VMware vSphere, which would bring an OpenFlow-compatible virtual switch to a much larger installed base. The missing piece is, of course, an OpenFlow controller. While an interesting mental exercise, I’m keenly interested to know what sort of real-world problems this might help solve, and would love to hear from any OpenFlow experts out there what they think.

  • Virtualizing physical switches: No, I’m not talking about running switch software on the hypervisor (think Nexus 1000V). Instead, I’m thinking more along the lines of FlowVisor, which in effect virtualizes a switch’s control plane so that multiple “slices” of a switch can be independently controlled by an external OpenFlow controller. If you’re familiar with NetApp, think of their “vfiler” construct, or think of the Virtual Device Contexts (VDCs) in a Nexus 7000. However, I’m thinking of something more device-independent than Nexus 7000 VDCs. As more and more switches move to x86 hardware, this seems like it might be something that could really take off. Multi-tenancy support (each “virtual switch instance” being independently managed), traffic isolation, QoS, VLAN isolation…lots of possibilities exist here.
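On the “OpenFlow on the hypervisor” point, it’s worth noting how little plumbing is actually required: handing an OVS bridge to an external OpenFlow controller is a one-liner today, which is part of why that particular mashup feels so close. The controller address below is a placeholder, and the wrapper prints the commands rather than running them:

```shell
run() { echo "would run: $*"; }

# Hand control of the bridge's forwarding to an external OpenFlow controller.
run ovs-vsctl set-controller br0 tcp:192.0.2.10:6633
# Inspect the flow entries the controller has pushed down.
run ovs-ofctl dump-flows br0
```

The missing piece, as noted above, is the controller itself—not the switch-side configuration.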

Are there any other groupings that are worth exploring or discussing? Any other “you got your virtualization peanut butter in my networking chocolate” combinations that might help address some of the issues in data centers today? Feel free to speak up in the comments below. Courteous comments are invited and encouraged.


In early January, I posted a 2012 project list. To summarize, here are the four broad goals that I set for myself for 2012:

  1. Learn to script in Perl.
  2. Learn to speak German.
  3. Become very familiar with the Xen hypervisor.
  4. Pursue CCNP certification.

Now that we are halfway through the year, where do things stand? Here’s a quick update.

  1. My Perl skills are still really elementary. The biggest challenge I’ve found is that without some sort of task or process to try to automate, trying to write code in Perl is kind of “disconnected.” Yes, you can walk through the sample code and the exercises in the book, but to make it real you need a relevant challenge. I’ve been searching for some common tasks to try to automate, but haven’t had a great deal of success yet.

  2. My German is progressing, but ever so slowly. I’m right now about one-third of the way through the Rosetta Stone modules I have.

  3. Learning Xen is also progressing. I do have a Xen Cloud Platform (XCP) system up and running in my home office; I’ve been installing and re-installing it so as to get a better feel for the intricacies involved. It’s currently broken—time for another rebuild!

  4. I have created the study framework for key topics on the CCNP ROUTE exam and am now adding content to the study framework. I haven’t yet taken any exams, so I guess you could say I haven’t really made any measurable progress on this goal.

All in all, I haven’t made the progress that I would have liked to make, given the timeframe. Not to make excuses, but there are two factors that have affected me more significantly than I had anticipated: travel and my video training project with Train Signal. Of the limited progress that I have made, most of it was in Q1, before I started my Train Signal project. Since I started the video training series, my travel has also picked up, and I’ve found that it’s extremely difficult to work on the video series while I’m traveling. Thus, the video training series has taken a lot more of my time than I had originally expected, and has stalled progress on my other initiatives.

So is a “mid-course correction” necessary? I think that it is. Here are the changes I’m making:

  1. Learning Perl: At this point, I’m putting my Perl efforts on hold. For me, the biggest obstacle in learning to script in Perl was having relevant tasks that need to be scripted, and that’s where I came up short. I simply couldn’t find tasks that I wanted or needed to automate in Perl. A number of readers commented on my original article that my choice of projects wasn’t particularly synergistic, and perhaps that is what is being reflected here.

  2. Learning German: To help encourage me to work on my German more frequently, I’m going to change the OmniFocus actions so that I need to complete modules on a more regular basis. (As I’ve written before elsewhere, I use OmniFocus to help keep me on track with projects and responsibilities.)

  3. Learning Xen: I’m continuing with Xen. Progress has been slow, but there has been progress. Several readers suggested I focus on KVM instead, but for now I’m going to stick it out with Xen. The primary challenge here has been finding good sources of information. Don’t be too surprised to see some blog posts as I wrestle through certain areas; perhaps these posts can be helpful to others.

  4. Pursue CCNP: My pursuit of CCNP will continue. I already have study framework documents created, and I hope to turn up a router simulation environment (using GNS3 or similar) soon. From the beginning I never intended to actually attain CCNP by year end (that’s 3 exams), but simply to make measurable progress (1 exam passed).

That’s where things stand, halfway through 2012. Perhaps these goals are ambitious, but I do believe that it’s necessary to challenge ourselves, to never remain static and dormant—otherwise we risk becoming irrelevant in a fast-paced world of changing technologies.

I welcome any suggestions, thoughts, or criticisms (courteous and constructive, of course!) in the comments below.


Recent Cisco Product Launch

Cisco recently launched a number of new products and new versions of products aimed at showing Cisco’s dedication and innovation in network switching. You can get Cisco’s summary of the products here. I’ll start with the hardware-focused announcements.

Hardware-Focused Announcements

Cisco’s announcements on the hardware side primarily centered around new switching capabilities: 40/100 Gigabit Ethernet (GE) cards for the Nexus 7000 series platform, as well as a 40 GE card for the Catalyst 6500 series. (The 40 GE card for the Catalyst 6500 series does require the newer Supervisor Engine 2T, however.)

<aside>Did you know that the 40/100 GE specification, 802.3ba, was ratified in June 2010? I didn’t realize it had been ratified that long ago.</aside>

For the Nexus 7000 series, Cisco unveiled two cards:

  • The M2-Series 6-port 40 Gigabit Ethernet Module with XL Option (how’s that for a mouthful?) sports, as the name suggests, 6 non-blocking 40 GE ports. With 16 of these blades in the 18-slot Nexus 7018, that provides up to 96 ports of 40 GE connectivity. The “XL Option” in the name of the card enables the card—in conjunction with the Scalable Feature License—to support more IPv4 routes (up to 1 million, depending on several factors), more IPv6 routes (up to 350,000, again depending on several factors), and more access control list (ACL) entries. These increased route and ACL limits could be useful in environments with multiple Virtual Routing and Forwarding (VRF) or multiple Virtual Device Context (VDC) instances. Based on the Cisco data sheet, it looks like this card can work with either Fabric-1 or Fabric-2 modules, although you’ll need Fabric-2 modules for the most throughput.
  • The M2-Series 2-port 100 Gigabit Ethernet Module with XL Option provides two non-blocking 100 GE ports. The “XL Option” functions in the same way as with the 6-port 40 GE card, and it does work with both Fabric-1 and Fabric-2 modules. Here’s the official Cisco data sheet.

Based on the data sheet, it looks like both of these cards will require version 6.1 of NX-OS, which—to my knowledge—is a brand-new release. (Version 6.0 of NX-OS for the Nexus 7000 series was released in late December.)

One interesting note about the 40 GE card is that it supports the use of a breakout cable that allows a single 40 GE port to support four 10 GE connections. This is true for both the Nexus 7000 and Catalyst 6500 cards, as far as I can tell. (Cisco references a FourX connector for the Catalyst 6500 card, but does not reference the same connector for the Nexus blade, instead simply mentioning a “breakout cable”.)

Cisco also introduced the Catalyst 4500-X, a fixed configuration 10 GE aggregation switch. They mentioned “40 GE readiness,” but it’s not clear when 40 GE uplinks will make their way to this particular platform.

Rounding out the hardware announcements was the Nexus 3064-X, a follow-up to the low-latency Nexus 3000 series of switches introduced some time ago. This version offers lower power consumption and additional reductions in switching latency. The specific switching latency reductions were not specified anywhere that I could find.

Software-Focused Announcements

The software-focused announcements were primarily centered around the Nexus 1000V/1010 and a new feature called Easy Virtual Network (EVN).

  • A new version of the Nexus 1000V was announced that supports VXLAN (Virtual Extensible LAN). I’ve discussed VXLAN extensively (see here), as have others in the industry, so I won’t rehash that again.
  • Also of note is that the Cisco Virtual Security Gateway (VSG) will offer zone-based firewalling services to VMs on VXLAN segments.
  • The Nexus 1010-X is a “beefed up” version of the Nexus 1010 (yes, I know this is technically a hardware product), intended to support more virtual networks. See here for more information on the differences between the Nexus 1010 and the Nexus 1010-X.
  • Easy Virtual Network (EVN) is the most interesting of the software-based announcements (to me, at least). Cisco touts EVN as “fully compatible with established standards, including Multiprotocol Label Switching (MPLS), MPLS VPN over IP (multipoint generic routing encapsulation, also known as mGRE), Multi-Virtual Route Forwarding (also known as VRF-Lite), and others.” (More information available here.) However, it appears that EVN doesn’t actually use any of these mechanisms. In fact, it’s unclear to me exactly what mechanism EVN does use. The data sheet (linked above) mentions the use of a VNET tag, and indicates that the VNET tag is stored in the 802.1q VLAN ID field. This looks like Cisco is creating yet another Layer 3 VPN solution, instead of leveraging existing solutions. Why not add VXLAN support to the hardware switches instead of creating EVN? Maybe I’m completely missing the mark here…feel free to correct me (courteously, of course!).

Why This Matters to You

All these product announcements and new product versions are pretty cool, but you might be wondering, “Why does this matter to me?” Good question. Here are my thoughts:

  • The 40 GE and 100 GE cards are important in data centers where we are seeing increased deployment of 10 GE to the servers. Once motherboard manufacturers start using 10 GE LoM (LAN on Motherboard) ports, the deployment of 10 GE to servers in data centers will naturally increase. Deploying 40 GE and 100 GE uplinks in 10 GE-heavy environments makes sense (depending on the details, naturally).
  • You already knew the VXLAN-capable 1000V was on the way (this was alluded to in the original VXLAN announcement at VMworld 2011), so no real surprises there.
  • The Nexus 1010-X simply allows you to run more VSBs (virtual service blades), such as the Nexus 1000V VSM (virtual supervisor module). If you’re a Nexus 1000V customer, you might have already started investigating the use of the Nexus 1010 to host the VSMs. Large customers (service providers, perhaps?) had a need for more VSBs on a single Nexus 1010, hence the Nexus 1010-X.
  • EVN…well, I’m stuck on EVN. I certainly see the need for simpler separation of traffic, but again I must ask why Cisco appears to be creating something new instead of re-using protocols that are perfectly suited for this purpose? Maybe I need a networking expert to explain it to me. (When I understand it, I’ll post an explanation that everyone else can understand. Fair?)
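To put some rough numbers behind that first bullet on uplink sizing (my own back-of-the-envelope math, not anything from Cisco's announcements), consider the oversubscription ratio of a top-of-rack switch as servers move to 10 GE:

```python
def oversubscription(server_ports, server_gbps, uplinks, uplink_gbps):
    """Ratio of southbound (server-facing) bandwidth to northbound (uplink) bandwidth."""
    return (server_ports * server_gbps) / (uplinks * uplink_gbps)

# 48 x 10 GE server ports with 4 x 10 GE uplinks: heavily oversubscribed
print(oversubscription(48, 10, 4, 10))   # 12.0
# The same edge with 4 x 40 GE uplinks: a much healthier 3:1
print(oversubscription(48, 10, 4, 40))   # 3.0
```

This is why 40 GE and 100 GE uplinks start to matter once 10 GE LoM ports make 10 GE to the server the default rather than the exception.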

As always, feel free to post any clarifications or corrections in the comments below. I’d love to hear from any networking gurus on any of the points that I raised in this article.

Tags: , ,

Welcome to Technology Short Take #19, the first Technology Short Take for 2012. Here’s this year’s first collection of links, articles, and thoughts regarding virtualization, storage, networking, and other data center technology-related topics. I hope you find something useful!


  • While configuration limits aren’t the most exciting reading, they are important from time to time. Here are some configuration limits for the UCS 6100 and 6200 series.
  • Understanding the differences—both positive and negative—between the various approaches to solving a particular challenge is a key skill. That’s why I like this article on HP Flex-10 versus NIOC for VDI. The author (Dwayne) weighs the pros and cons of both approaches in helping to shape network traffic for VDI deployments using 10Gb Ethernet.
  • It would appear that my recent VXLAN and OTV connectivity posts (incorrect VXLAN post here, corrected VXLAN post here, and OTV/VXLAN post here) sparked a discussion about whether we really need to concern ourselves with traffic trombones. On one side we have Brad Hedlund speculating that the network should be treated like a large virtual I/O fabric; on the other side we have Greg Ferro countering that we do need to be concerned about the topology of the network. I can see both sides of the argument, but at this stage of the game, I’m inclined to agree more with Greg. In the future (it’s unclear how far in the future) I think that Brad’s points will be more valid, but not right now.
  • This post by Ivan Pepelnjak on VXLAN, IP multicast, OpenFlow, and control planes highlights some of the current limitations with VXLAN and thus reinforces why I think that Brad’s arguments are a bit ahead of their time.
  • A few folks had some write-ups on Embrane Heleos: Greg Ferro, Jason Edelman, Brad Hedlund, Brad Casemore, and Ivan Pepelnjak. My question (and this is spurred in part by some comments by Brad Casemore): is this another Cisco spin-in move?

And that’s it for this time around; as always, I hope you’ve found something useful here. Courteous comments are always welcome; feel free to speak up below.

Tags: , , , , , , , , , , ,

Rather than posting some sort of “2011 in review” article where I talk about how many visitors the site had or how many RSS subscribers there are, I thought I’d focus instead on the upcoming year and some of the projects in which I’ll be involved. Describing some of the projects I’m undertaking in 2012 gives you—the readers—a rough idea of the types of content that will likely appear here in the coming year.

Here are some of my 2012 projects (some of these I’ve already tweeted about):

  1. I’m going to learn to script in Perl. Many people have asked why Perl and why not Python or Ruby or something else. Honestly, I don’t have a really good answer for you. I tried (unsuccessfully) to teach myself Perl a couple of years ago, so I still have the O’Reilly Learning Perl book. Rather than spending money to learn some other scripting language, it seemed reasonable to revisit Perl again and just leverage the resources I already have. You might see a few Perl-related posts here and there as I work through Learning Perl, but I’ll try not to bore you with elementary stuff.

  2. I’m going to learn German. Same scenario here—many people have asked why German and why not Spanish or French. I do have an answer this time: I seem to be spending a fair amount of time in Vienna, so German seemed to make sense. I also have a series of customer meetings planned in Germany in the first quarter of this year. Plus, German is completely new and different from anything I’ve learned before, and I wanted to challenge myself to learn and think in new ways. It’s unlikely that this will find its way into any blog posts, but you never know…

  3. I’m going to become much more familiar with the Xen hypervisor. I haven’t yet decided if I’ll focus strictly on the open source version of Xen or Citrix XenServer; I’m open to suggestions there. No, this doesn’t mean that I’m abandoning VMware or anything like that; I just want to expand my knowledge. You can’t simply discount Xen; after all, Amazon EC2 is built on Xen. Along with this dive into Xen, I’ll also be looking very closely at Open vSwitch and OpenStack. I’d expect that a great deal of this education will eventually end up in various blog posts here.

  4. I’m going to pursue my CCNP. I “re-achieved” CCNA last year, and this year I’m pursuing my CCNP. As with Xen, I’m confident that the learning curve required to move closer to (or even achieve) CCNP will result in a number of related blog posts on various networking technologies or concepts.

I do have a few other projects planned for this upcoming year, but I’m not quite ready to discuss those publicly yet. At least one of these other projects will be something new that I haven’t done before. Stretching myself and my skills/experience in new directions is a bit of a theme this year.

If you have any tips/tricks/advice to share on any of these upcoming projects, or if there are specific things related to these projects that you’d like to see blogged about here, please let me know in the comments. Thanks, and I hope that 2012 is going to be as exciting for you as it will be for me!

Tags: , , , ,

Since attending Cisco Live 2011 in Las Vegas earlier this year (see my summary list of blog posts), my mind has been swirling with ideas about how various technologies might work (or not work) together. In the first of a series of “Thinking Out Loud” posts in which I’ll attempt to explore—and spur discussion regarding—how certain networking technologies might integrate with virtualization technologies, I’d like to explore this question: “What if vSphere had FabricPath support?”

<aside>By the way, if you’re not familiar with the “Thinking Out Loud” posts, the key point is simply to stimulate discussion and encourage knowledge sharing.</aside>

If you’re unfamiliar with FabricPath, you might find my session blog of a FabricPath session at Cisco Live helpful to bring you up to speed. Also, while this post focuses on potential FabricPath integration, I think many of the same points would apply to TRILL and potentially SPB.

When I first started considering this idea, a few things came to mind:

  1. The first thing that came to mind was that the quickest, most “natural” way of bringing FabricPath support to vSphere would be via the Nexus 1000V. While I’m sure that the NX-OS code base between the Nexus 1000V and the Nexus 7000 (where FabricPath is available) is dramatically different, it’s still closer than trying to add FabricPath support directly to vSphere.
  2. The second thought was that, intuitively, FabricPath would bring value to vSphere. After all, FabricPath is about eliminating STP and increasing the effective bandwidth of your network, right?

It’s this second thought I’d like to drill down on in this post.

When I first started considering what benefits, if any, FabricPath support in vSphere might bring, I thought that FabricPath would really bring some value. After all, what does FabricPath bring to the datacenter network? Multiple L2 uplinks between switches, low latency any-to-any switching, and equal cost multipathing, to name a few things. Surely these would be of great benefit to vSphere, wouldn’t they? That’s what I thought…until I created a diagram.

Consider this diagram of a “regular” vSphere network topology:

Non-FP-aware network

This is fairly straightforward stuff, found in many data centers today. What would bringing FabricPath into this network, all the way down to the ESXi hosts, give us? Consider this diagram:

FP-aware network 1

We’ve replaced the upstream switches with FabricPath-aware switches and put in our fictional FP-aware Nexus 1000V, but what does it change? From what I can tell, not much changes. Consider these points:

  • In both cases, each Nexus 1000V has two uplinks and has the ability to actively use both uplinks. The only difference the presence of FabricPath would make, as far as I can tell, is in the selection of which uplink to use.
  • In both cases, host-to-host (or VM-to-VM) traffic still has to travel through the upstream switches. The presence of FabricPath awareness on the vSphere hosts doesn’t change this.
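To illustrate that first point: with or without FabricPath, per-flow uplink selection ultimately boils down to hashing some flow identifiers onto the set of active uplinks. Something like this sketch (entirely my own illustration—the interface names are made up, and real implementations hash more fields than just the MAC pair):

```python
import hashlib

def pick_uplink(src_mac, dst_mac, uplinks):
    """Hash a flow's MAC pair onto one of the active uplinks.

    Any flow-based scheme -- FabricPath-style ECMP included -- reduces to some
    variant of this: deterministic per flow, spread across all active links.
    """
    digest = hashlib.sha256((src_mac + dst_mac).encode()).digest()
    return uplinks[digest[0] % len(uplinks)]

uplinks = ["vmnic0", "vmnic1"]
choice = pick_uplink("00:50:56:aa:bb:cc", "00:50:56:dd:ee:ff", uplinks)
print(choice)  # same answer every time for this MAC pair
```

Swapping in a FabricPath-aware hash changes *which* uplink a given flow lands on, but not the fundamental picture: two uplinks, both active, traffic still traversing the upstream switches.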

That second point, in my mind, deserves some additional thought. FabricPath enables multiple, active L2 links between switches, but in both of the topologies shown above the traffic has to travel through the upstream switches. In fact, the only way to change the travel patterns would be to add extra host-to-host links, like this:

FP-aware network with host-to-host links

OK, if these extra host-to-host links were present, then the presence of FabricPath at the ESXi host layer might make a difference. VM-to-VM traffic could then just hop across without going through the upstream switches. All is good, right?

Not so fast! I’m not a networking guru, but I see some potential problems with this:

  • This approach isn’t scalable. Every ESXi host would need a direct link to every other host, which means each of N hosts needs (N-1) host-to-host links. That would limit the scalability of “fabric-connected” vSphere hosts, since there simply isn’t room to add that many networking ports (nor is it very cost-effective).
  • Does adding host-to-host links fundamentally change the nature of the virtual switch? The way virtual switches (or edge virtual bridges, if you prefer) operate today is predicated upon certain assumptions; would these assumptions need to change? I’m not sure about this point yet; I’m still processing the possibilities.
  • What does this topology really buy an organization? Most data center switches have pretty low L2 switching latencies (and getting lower all the time). Would a host-to-host link really get us any better performance? I somehow don’t think so.
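The scalability concern in the first bullet is easy to quantify: a full mesh of host-to-host links grows quadratically with cluster size. A quick sketch:

```python
def mesh_links(n_hosts):
    """Ports needed per host, and total links, for a full host-to-host mesh."""
    return n_hosts - 1, n_hosts * (n_hosts - 1) // 2

for n in (4, 16, 64):
    ports, links = mesh_links(n)
    print(f"{n} hosts: {ports} ports per host, {links} total links")
```

Even a modest 16-host cluster would demand 15 NIC ports per host and 120 cables; at 64 hosts it's 63 ports per host. That's a non-starter for any realistically sized vSphere environment.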

In the end, it seems to me that there is (currently) very little value in bringing FabricPath (or TRILL or SPB) support all the way down to the virtualization hosts. However, I’d love to hear what you think. Am I wrong in some of my conclusions? My understanding of FabricPath (and related technologies) is still evolving, so please let me know if something I’ve said is incorrect. Speak up in the comments!

Tags: , , , ,
