VXLAN

Welcome to Technology Short Take #44, the latest in my irregularly-published series of articles, links, ideas, and thoughts about various data center-related technologies. Enjoy!

Networking

  • One of the original problems with the VXLAN IETF specification was that it (deliberately) didn’t include any control plane information; as a result, the process of mapping MAC addresses to VTEPs (VXLAN Tunnel Endpoints) wasn’t defined, and the early implementations relied on multicast to handle this issue. To help address this, Cumulus Networks (and possibly Metacloud, I’m not sure of their involvement yet) has released an open source project called vxfld. As described in this Metacloud blog post, vxfld is designed to “handle VXLAN traffic from any operating system or hardware platform that adheres to the IETF Internet-Draft for VXLAN”. (For a toy sketch of the flood-and-learn behavior that a control plane like vxfld replaces, see the snippet just after this list.)
  • Nir Yechiel recently posted part 1 of a discussion on the need for network overlays. This first post is more of a discussion of why VLANs and VLAN-based derivatives aren’t sufficient, and why we should be looking to routing (layer 3) constructs instead. I’m looking forward to part 2 of the series.
  • One ongoing discussion in the network industry these days—or so it seems—is the interaction between network overlays and the underlying transport network. Some argue that tight integration is required; others point to streaming video services and VoIP running across the Internet and insist that no integration or interaction is needed. In this post, Scott Jensen argues in favor of the former—that SDN solutions shouldn’t just manage network overlays, but should also manage the configuration of the physical transport network. I’d love to hear from more networking pros (please disclose company affiliations) about their thoughts on this matter.
  • I like the distinction made here between network automation and SDN.
  • Need to get a better grasp on OpenFlow? Check out OpenFlow basics and OpenFlow deep-dive.
  • Here’s a write-up on connecting Docker containers using VXLAN. I think there’s a great deal of promise for OVS in containerized environments, but what’s needed is better/tighter integration between OVS and container solutions like Docker.
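
To make the missing control plane concrete, here is a toy Python sketch of the flood-and-learn behavior a VTEP falls back on when nothing like vxfld is distributing MAC-to-VTEP mappings. The class, addresses, and MACs below are invented purely for illustration—this isn’t vxfld’s code or any real implementation.

    # Hypothetical flood-and-learn VTEP: unknown destinations are flooded to all
    # peers (a stand-in for the multicast group), and mappings are learned from
    # the outer source IP of frames that arrive back. A control plane such as
    # vxfld distributes these mappings instead, removing the reliance on flooding.
    class Vtep:
        def __init__(self, ip, peers):
            self.ip = ip
            self.peers = peers      # remote VTEP IPs (or think: a multicast group)
            self.mac_table = {}     # learned inner MAC -> remote VTEP IP

        def learn(self, inner_src_mac, outer_src_ip):
            self.mac_table[inner_src_mac] = outer_src_ip

        def destinations(self, inner_dst_mac):
            # Known unicast goes to exactly one VTEP; unknown unicast, broadcast,
            # and multicast (BUM) traffic has to be flooded everywhere.
            if inner_dst_mac in self.mac_table:
                return [self.mac_table[inner_dst_mac]]
            return list(self.peers)

    vtep = Vtep("192.168.1.10", peers=["192.168.1.20", "192.168.1.30"])
    print(vtep.destinations("00:50:56:aa:bb:02"))   # unknown MAC -> flood to all peers
    vtep.learn("00:50:56:aa:bb:02", "192.168.1.20")
    print(vtep.destinations("00:50:56:aa:bb:02"))   # learned -> single VTEP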

Servers/Hardware

  • Is Intel having second thoughts about software-defined infrastructure? That’s the core question in this blog post, which explores the future of Intel in a software-defined world and the increasing interest in non-x86 platforms like ARM.
  • On the flip side, proponents who claim that platforms like ARM and others are necessary in order to move forward with SDN and NFV initiatives should probably read this article on 80 Gbps performance from an off-the-shelf x86 server. Impressive.

Security

  • It’s nice to see that work on OpenStack Barbican is progressing nicely; see this article for a quick overview of the project and an update on the status.

Cloud Computing/Cloud Management

  • SDN Central has a nice write-up on the need for open efforts in the policy space, which includes the Congress project.
  • The use of public cloud offerings as disaster recovery targets is on the rise; note this article from Microsoft on how to migrate on-premises workloads to Azure using Azure Site Recovery. VMware has a similar offering via the VMware vCloud Hybrid Service recovery-as-a-service offering.
  • The folks at eNovance have a write-up on multi-tenant Docker with OpenStack Heat. It’s an interesting write-up, but not for the faint of heart—to make their example work, you’ll need the latest builds of Heat and the Docker plugin (it doesn’t work with the stable branch of Heat).
  • Preston Bannister took a look at cloud application backup in OpenStack. His observations are, I think, rational and fair, and I’m glad to see someone paying attention to this topic (which, thus far, I think has been somewhat ignored).
  • Interested in Docker and Kubernetes on Azure? See here and here for more details.
  • This article takes a look at Heat-Translator, an effort designed to provide some interoperability between TOSCA and OpenStack HOT documents for application deployment and orchestration. The portability of orchestration resources is one of several aspects you’ll want to examine as you progress down the route of fully embracing a cloud computing operational model.

Operating Systems/Applications

  • Looks like we have another convert to Markdown—Anthony Burke recently talked about how he uses Markdown. Regular readers of this site know that I do almost all of my content generation using MultiMarkdown (a variation of Markdown with some expanded syntax options). Here’s a post I recently published on some useful Markdown tools for OS X.
  • Good to see that Ivan Pepelnjak thinks infrastructure as code makes sense. I guess that means the time I’ve spent with Puppet (you can browse Puppet-related posts here) wasn’t a waste.
  • I don’t know if I’ve mentioned this before (sorry if that’s the case), but I’m liking this “NIX4NetEng” series going on over at Nick Buraglio’s site (part 1, part 2, and part 3).
  • Mike Foley has a blog post on how to go from zero to Windows domain controller in only 4 reboots. Handy.

Storage

Virtualization

  • Running Hyper-V with Linux VMs? Ben Armstrong details what versions of Linux support the various Hyper-V features in this post.
  • Here’s a quick write-up on running VMs with VirtualBox 4.3 on a headless Ubuntu 14.04 LTS server.
  • Nested OS X guest on top of nested ESXi on top of VMware Fusion? Must be something William Lam’s tried. Go have a look at his write-up.
  • Here’s a quick update on Nova-Docker, the effort in OpenStack to allow users to deploy Docker containers via Nova. I’m not yet convinced that treating Docker as a hypervisor in Nova is the right path, but we’ll see how things develop.
  • This post is a nice write-up on the different ways to connect a Docker container to a local network.
  • Weren’t able to attend VMworld US in San Francisco last week? No worries. If you have access to the recorded VMworld sessions, check out Jason Boche’s list of the top 10 sessions for a priority list of what recordings to check out. Or need a recap of the week? See here (one of many recap posts, I’m sure).

That’s it this time around; hopefully I was able to include something useful for you. As always, all courteous comments are welcome, so feel free to speak up in the comments. In particular, if there is a technology area that I’m not covering (or not covering well), please let me know—and suggestions for more content sources are certainly welcome!


Welcome to Technology Short Take #33, the latest in my irregularly-published series of articles discussing various data center technology-related links, articles, rants, thoughts, and questions. I hope that you find something useful here. Enjoy!

Networking

  • Tom Nolle asks the question, “Is virtualization reality even more elusive than virtual reality?” It’s a good read; the key thing that I took away from it was that SDN, NFV, and related efforts are great, but what we really need is something that can pull all these together in a way that customers (and providers) reap the benefits.
  • What happens when multiple VXLAN logical networks are mapped to the same multicast group? Venky explains it in this post. Venky also has a great write-up on how the VTEP (VXLAN Tunnel End Point) learns and creates the forwarding table.
  • This post by Ranga Maddipudi shows you how to use App Firewall in conjunction with VXLAN logical networks.
  • Jason Edelman is on a roll with a couple of great blog posts. First up, Jason goes off on a rant about network virtualization, briefly hitting topics like the relationship between overlays and hardware, the role of hardware in network virtualization, the changing roles of data center professionals, and whether overlays are the next logical step in the evolution of the network. I particularly enjoyed the snippet from the post by Bill Koss. Next, Jason dives a bit deeper on the relationship between network overlays and hardware, and shares his thoughts on where it does—and doesn’t—make sense to have hardware terminating overlay tunnels.
  • Another post by Tom Nolle explores the relationship—complicated at times—between SDN, NFV, and the cloud. Given that we define the cloud (sorry to steal your phrase, Joe) as elastic, pooled resources with self-service functionality and ubiquitous access, I can see why Tom states that to discuss SDN or NFV without discussing cloud is silly. On the flip side, though, I have to believe that it’s possible for organizations to make a gradual shift in their computing architectures and processes, so one almost has to discuss these various components individually—trying to tackle them all together at once makes the discussion nearly impossible to manage. Thoughts?
  • If you haven’t already introduced yourself to VXLAN (one of several draft protocols used as an overlay protocol), Cisco Inferno has a reasonable write-up.
  • I know Steve Jin, and he’s a really smart guy, but I must disagree with some of the statements he made back in April regarding what software-defined networking is and is not and where it fits. I talked before about the difference between network virtualization and SDN, so no need to mention that again. Also, the two key flaws that Steve identifies—single point of failure and scalability—aren’t flaws with SDN/network virtualization, but rather flaws in a particular implementation of said technologies, IMHO.

Servers/Hardware

  • Correction from the last Technology Short Take—I incorrectly stated that the HP Moonshot offerings were ARM-based, and therefore wouldn’t support vSphere. I was wrong. The servers (right now, at least) are running Intel Atom S1260 CPUs, which are x86-based and do offer features like Intel VT-x. Thanks to all who pointed this out, and my apologies for the error!
  • I missed this on the #vBrownBag series: designing HP Virtual Connect for vSphere 5.x.

Security

Cloud Computing/Cloud Management

  • Hyper-V as hypervisor with OpenStack Compute? Sure, see here.
  • Cody Bunch, who has been focusing quite a bit on OpenStack recently, has a nice write-up on using Razor and Chef to automate an OpenStack build. Part 1 is here; part 2 is here. Good stuff—keep it up, Cody!
  • I’ve mentioned in some of my OpenStack presentations (see SpeakerDeck or Slideshare) that a great place to start if you’re just getting started is DevStack. Here, Brent Salisbury has a nice write-up on using DevStack to install OpenStack Grizzly.

Operating Systems/Applications

  • Boxen, a tool created by GitHub to manage their OS X Mountain Lion laptops for developers, looks interesting. Might be a useful tool for other environments, too.
  • If you use TextMate2 (I switched to BBEdit a little while ago after being a long-time TextMate user), you might enjoy this quick post by Colin McNamara on Puppet syntax highlighting using TextMate2.

Storage

  • Anyone have more information on Jeda Networks? They’ve been mentioned a couple of times on GigaOm (here and here), but I haven’t seen anything concrete yet. Hey, Stephen Foskett, if you’re reading: get Jeda Networks to the next Tech Field Day.
  • Tim Patterson shares some code from Luc Dekens that helps check VMFS version and block sizes using PowerCLI. This could come in quite handy in making sure you know how your datastores are configured, especially if you are in the midst of a migration or have inherited an environment from someone else.

Virtualization

  • Interested in using SAML and Horizon Workspace with vCloud Director? Tom Fojta shows you how.
  • If you aren’t using vSphere Host Profiles, this write-up on the VMware SMB blog might convince you why you should and show you how to get started.
  • Michael Webster tackles the question: is now the best time to upgrade to vSphere 5.1? Read the full post to see what Michael has to say about it.
  • Duncan points out an easy error to make when working with vSphere HA heartbeat datastores in this post. Key takeaway: sometimes the fix is a lot simpler than we might think at first. (I know I’m guilty of making things more complicated than they need to be at times. Aren’t we all?)
  • Jon Benedict (aka “Captain KVM”) shares a script he wrote to help provide high availability for RHEV-M.
  • Chris Wahl has a nice write-up on using log shipping to protect your vCenter database. It’s a bit over a year old (surprised I missed it until now), and—as Chris points out—log shipping doesn’t protect the database (primary and secondary copies) against corruption. However, it’s better than nothing (which I suspect is what far too many people are using).

Other

  • If you aspire to be a writer—whether that be a blogger, author, journalist, or other—you might find this article on using the DASH method for writing to be helpful. The six tips at the end of the article are especially helpful, I think.

Time to wrap this up for now; the rest will have to wait until the next Technology Short Take. Until then, feel free to share your thoughts, questions, or rants in the comments below. Courteous comments are always welcome!


Welcome to Technology Short Take #32, the latest installment in my irregularly-published series of link collections, thoughts, rants, raves, and miscellaneous information. I try to keep the information linked to data center technologies like networking, storage, virtualization, and the like, but occasionally other items slip through. I hope you find something useful.

Networking

  • Ranga Maddipudi (@vCloudNetSec on Twitter) has put together two blog posts on vCloud Networking and Security’s App Firewall (part 1 and part 2). These two posts are detailed, hands-on, step-by-step guides to using the vCNS App firewall—good stuff if you aren’t familiar with the product or haven’t had the opportunity to really use it.
  • The sentiment behind this post isn’t unique to networking (or networking engineers), but that was the original audience so I’m including it in this section. Nick Buraglio climbs on his SDN soapbox to tell networking professionals that changes in the technology field are part of life—but then provides some specific examples of how this has happened in the past. I particularly appreciated the latter part, as it helps people relate to the fact that they have undergone notable technology transitions in the past but probably just don’t realize it. As I said, this doesn’t just apply to networking folks, but to everyone in IT. Good post, Nick.
  • Some good advice here on scaling/sizing VXLAN in VMware deployments (as well as some useful background information to help explain the advice).
  • Jason Edelman goes on a thought journey connecting some dots around network APIs, abstractions, and consumption models. I’ll let you read his post for all the details, but I do agree that it is important for the networking industry to converge on a consistent set of abstractions. Jason and I disagree on whether OpenStack Networking (formerly Quantum) should be the basis here; he says it shouldn’t be (it’s not well-known in the enterprise), while I say it should be (it already represents work created collaboratively by multiple vendors and allows for different back-end implementations). A rough sketch of the sort of abstractions I mean appears just after this list.
  • Need a reasonable introduction to OpenFlow? This post gives a good introduction to OpenFlow, and the author takes care to define OpenFlow as accurately and precisely as possible.
  • SDN, NFV—what’s the difference? This post does a reasonable job of explaining the differences (and the relationship) between SDN and NFV.
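
As a rough illustration of the kind of abstractions I’m talking about, here’s a deliberately simplified Python data model of the logical objects OpenStack Networking exposes—network, subnet, and port. The class and attribute names are my own shorthand, not the actual Neutron/Quantum API; the point is only that the same logical model can be realized by very different back-end implementations.

    # Hypothetical, simplified model -- not the real OpenStack Networking API.
    # Different plugins (OVS, Linux bridge, vendor controllers) could all
    # implement these same logical objects.
    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class Subnet:
        cidr: str                       # e.g. "10.0.0.0/24"
        gateway_ip: Optional[str] = None

    @dataclass
    class Network:
        name: str
        tenant_id: str
        subnets: List[Subnet] = field(default_factory=list)

    @dataclass
    class Port:
        network: Network
        mac_address: str
        fixed_ip: str                   # allocated from one of the network's subnets

    web_net = Network(name="web-tier", tenant_id="tenant-a")
    web_net.subnets.append(Subnet(cidr="10.0.0.0/24", gateway_ip="10.0.0.1"))
    vm_port = Port(network=web_net, mac_address="fa:16:3e:00:00:01", fixed_ip="10.0.0.5")
    print(vm_port)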

Servers/Hardware

  • Chris Wahl provides a quick overview of the HP Moonshot servers, HP’s new ARM-based offerings. I think that Chris may have accidentally overlooked the fact that these servers are not x86-based; therefore, a hypervisor such as vSphere is not supported. Linux distributions that offer ARM support—like Ubuntu, RHEL, and SuSE—are supported, however. The target market for this is massively parallel workloads that will benefit from having many cores available. It will be interesting to see how the support of a “Tier 1” hardware vendor like HP affects the adoption of ARM in the enterprise.

Security

  • Ivan Pepelnjak talks about a demonstration of an attack based on VM BPDU spoofing. In vSphere 5.1, VMware addressed this potential issue with a feature called BPDU Filter. Check out how to configure BPDU Filter here.

Cloud Computing/Cloud Management

  • Check out this post for some vCloud Director and RHEL 6.x interoperability issues.
  • Nick Hardiman has a good write-up on the anatomy of an AWS CloudFormation template.
  • If you missed the OpenStack Summit in Portland, Cody Bunch has a reasonable collection of Summit summary posts here (as well as materials for his hands-on workshops here). I was also there, and I have some session live blogs available for your pleasure.
  • We’ve probably all heard the “pets vs. cattle” argument applied to virtual machines in a cloud computing environment, but Josh McKenty of Piston Cloud Computing asks whether it is now time to apply that thinking to the physical hosts as well. Considering that the IT industry still seems to be struggling with applying this line of thinking to virtual systems, I suspect it might be a while before it applies to physical servers. However, Josh’s arguments are valid, and definitely worth considering.
  • I have to give Rob Hirschfeld some credit for—as a member of the OpenStack Board—acknowledging that, in his words, “we’ve created such a love fest for OpenStack that I fear we are drinking our own kool aide.” Open, honest, transparent dealings and self-assessments are critically important for a project like OpenStack to succeed, so kudos to Rob for posting a list of some of the challenges facing the project as adoption, visibility, and development accelerate.

Operating Systems/Applications

Nothing this time around, but I’ll stay alert for items to add next time.

Storage

  • Nigel Poulton tackles the question of whether ASIC (application-specific integrated circuit) use in storage arrays elongates the engineering cycles needed to add new features. This “double-edged sword” argument is present in networking as well, but this is the first time I can recall seeing the question asked about modern storage arrays. While Nigel’s article specifically refers to the 3PAR ASIC and its relationship to “flash as cache” functionality, the broader question still stands: at what point do the drawbacks of ASICs begin to outweigh the benefits?
  • Quite some time ago I pointed readers to a post about Target Driven Zoning from Erik Smith at EMC. Erik recently announced that TDZ works after a successful test run in a lab. Awesome—here’s hoping the vendors involved will push this into the market.
  • Using iSER (iSCSI Extensions for RDMA) to accelerate iSCSI traffic seems to offer some pretty promising storage improvements (see this article), but I can’t help but feel like this is a really complex solution that may not offer a great deal of value moving forward. Is it just me?

Virtualization

  • Kevin Barrass has a blog post on the VMware Community site that shows you how to create VXLAN segments and then use Wireshark to decode and view the VXLAN traffic, all using VMware Workstation.
  • Andre Leibovici explains how Horizon View Multi-VLAN works and how to configure it.
  • Looking for a good list of virtualization and cloud podcasts? Look no further.
  • Need Visio stencils for VMware? Look no further.
  • It doesn’t look like it has changed much from previous versions, but nevertheless some people might find it useful: a “how to” on virtualization with KVM on CentOS 6.4.
  • Captain KVM (cute name, a take-off of Captain Caveman for those who didn’t catch it) has a couple of posts on maximizing 10Gb Ethernet on KVM and RHEV (the KVM post is here, the RHEV post is here). I’m not sure that I agree with his description of LACP bonds (“2 10GbE links become a single 20GbE link”), since any given flow in a LACP configuration can still only use one link out of the bond. It’s more accurate to say that aggregate bandwidth increases, but that’s a relatively minor nit overall. (The toy hashing example just after this list illustrates the point.)
  • Ben Armstrong has a write-up on how to install Hyper-V’s integration components when the VM is offline.
  • What are the differences between QuickPrep and Sysprep? Jason Boche’s got you covered.
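
Here’s the toy illustration of the LACP point I mentioned above. The hash below just mimics a generic layer 3+4 transmit hash; it isn’t the algorithm of any particular bonding driver or switch.

    # A given flow always hashes to the same member link, so a single flow tops
    # out at one link's bandwidth (10GbE); only multiple concurrent flows can
    # make use of the bond's aggregate 20GbE.
    def pick_link(src_ip, dst_ip, src_port, dst_port, num_links=2):
        # Stand-in for a layer 3+4 transmit hash policy
        return hash((src_ip, dst_ip, src_port, dst_port)) % num_links

    one_flow = ("10.0.0.1", "10.0.0.2", 49152, 2049)
    print(pick_link(*one_flow))     # the same link for this flow, every packet

    many_flows = [("10.0.0.1", "10.0.0.2", sport, 2049) for sport in range(49152, 49168)]
    print({pick_link(*f) for f in many_flows})   # several flows spread across both links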

I suppose that’s enough information for now. As always, courteous comments are welcome, so feel free to add your thoughts in the comments below. Thanks for reading!


Dan Wendlandt said something today in the NVP deep dive session (liveblog of the session here) that really crystallized something for me. I thought perhaps I might be the only one that was seeing a trend, but Dan’s comment leads me to believe there are others seeing this trend as well. Here’s the quote, taken from my liveblog of the session:

It is important to note, as Dan does, that a tunneling protocol alone is not network virtualization.

There’s a lot of buzz in the industry about network virtualization and network overlays, and often those terms are used interchangeably. People talk about the need for multi-tenancy and address space isolation, point to network overlays like VXLAN, NVGRE, and STT as the answer to all our problems, and in so doing they (inadvertently) conflate network overlays with network virtualization. Network overlays and network virtualization aren’t the same thing, and people that use them interchangeably probably don’t fully understand what’s involved.

<aside>By the way, if you’re not familiar with the idea of network overlays, I’d recommend reading this, this, and this to get you started. There’s plenty more out there, but those three articles will at least prime the pump, I think.</aside>

Network overlays are great for address space isolation (for example, isolating duplicate MAC addresses, duplicate IP addresses, or duplicate VLAN IDs). As such, network overlays can be an important part of network virtualization. You need more than a network overlay, though, to have network virtualization; you also need virtualized network services (like NAT, firewalls, ACLs, QoS, routing, and the like) and you need a control plane (else how would you coordinate the various pieces within the network virtualization solution?). The overlay protocol is just one piece of the puzzle, so using “network overlay” interchangeably with “network virtualization” is incorrect.
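
To make the overlay piece of the puzzle concrete, here’s a small scapy sketch of what a protocol like VXLAN actually provides: MAC-in-UDP encapsulation keyed by a 24-bit VNI, and nothing more. The addresses, MACs, and VNI below are made up for illustration; everything else discussed above (services, control plane) has to come from somewhere else.

    # Requires scapy (pip install scapy). This shows only the encapsulation an
    # overlay provides; it says nothing about NAT, firewalling, routing, QoS,
    # or the control plane a full network virtualization solution needs.
    from scapy.all import Ether, IP, UDP
    from scapy.layers.vxlan import VXLAN

    inner = (Ether(src="00:50:56:aa:bb:01", dst="00:50:56:aa:bb:02")
             / IP(src="10.0.0.1", dst="10.0.0.2")
             / UDP(dport=80))

    frame = (Ether()
             / IP(src="192.168.1.10", dst="192.168.1.20")   # outer header: VTEP to VTEP
             / UDP(sport=54321, dport=4789)                  # VXLAN's well-known UDP port
             / VXLAN(vni=5001)                               # 24-bit VNI = segment/tenant ID
             / inner)                                        # original L2 frame, untouched

    frame.show()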

As always, I welcome the input of those more educated/knowledgeable than I am. If you’re a networking expert (or a virtual networking expert), feel free to speak up in the comments and correct my misunderstanding or misconceptions (please disclose vendor affiliations). I’m always open to deepening my knowledge—and helping others with their understanding and knowledge along the way.


Welcome to Technology Short Take #28, the first Technology Short Take for 2013. As always, I hope that you find something useful or informative here. Enjoy!

Networking

  • Ivan Pepelnjak recently wrote a piece titled “Edge and Core OpenFlow (and why MPLS is not NAT)”. It’s an informative piece—Ivan’s stuff is always informative—but what really drew my attention was his mention of a paper by Martin Casado, Teemu Koponen, and others that calls for a combination of MPLS and OpenFlow (and an evolution of OpenFlow into “edge” and “core” versions) to build next-generation networks. I’ve downloaded the paper and intend to review it in more detail. I’d love to hear from any networking experts who’ve read the paper—what are your thoughts?
  • Speaking of Ivan…it also appears that he’s quite pleased with Microsoft’s implementation of NVGRE in Hyper-V. Sounds like some of the other vendors need to get on the ball.
  • Here’s a nice explanation of CloudStack’s physical networking architecture.
  • The first fruits of Brad Hedlund’s decision to join VMware/Nicira have shown up in this joint article by Brad, Bruce Davie, and Martin Casado describing the role of network virtualization in the software-defined data center. (It doesn’t matter how many times I say or write “software-defined data center,” it still feels like a marketing term.) This post is fairly high-level and abstract; I’m looking forward to seeing more detailed and in-depth posts in the future.
  • Art Fewell speculates that the networking industry has “lost our way” and become a “big bag of protocols” in this article. I do agree with one of the final conclusions that Fewell makes in his article: that SDN (a poorly-defined and often over-used term) is the methodology of cloud computing applied to networking. Therefore, SDN is cloud networking. That, in my humble opinion, is a more holistic and useful way of looking at SDN.
  • It appears that the vCloud Connector posts (here and here) that (apparently) incorrectly identify VXLAN as a component/prerequisite of vCloud Connector have yet to be corrected. (Hat tip to Kenneth Hui at VCE.)

Servers/Hardware

Nothing this time around, but I’ll watch for content to include in future posts.

Security

  • Here’s a link to a brief (too brief, in my opinion, but perhaps I’m just being overly critical) post on KVM virtualization security, authored by Dell TechCenter. It provides some good information on securing the libvirt communication channel.

Cloud Computing/Cloud Management

  • Long-time VMware users probably remember Mike DiPetrillo, whose website has now, unfortunately, gone offline. I mention this because I’ve had this article on RabbitMQ AMQP with vCloud Director sitting in my list of “articles to write about” for a while, but some of the images were missing and I couldn’t find a link for the article. I finally found a link to a reprinted version of the article on DZone Enterprise Integration. Perhaps the article will be of some use to someone.
  • Sam Johnston talks about reliability in the cloud with a discussion on the merits of “reliable software” (software designed for failure) vs. “unreliable software” (more traditional software not designed for failure). It’s a good article, but I found the discussion between Sam and Massimo (of VMware) equally useful.

Operating Systems/Applications

Storage

  • Want some good details on the space-efficient sparse disk format in vSphere 5.1? Andre Leibovici has you covered right here.
  • Read this article for good information from Andre on a potential timeout issue with recomposing desktops and using the View Storage Accelerator (aka Content-Based Read Cache, or CBRC).
  • Apparently Cormac Hogan, aka @VMwareStorage on Twitter, hasn’t gotten the memo that “best practices” is now outlawed. He should have named this series on NFS with vSphere “NFS Recommended Practices”, but even misnamed as they are, the posts still have useful information. Check out part 1, part 2, and part 3.
  • If you’d like to get a feel for how VMware sees the future of flash storage in vSphere environments, read this.

Virtualization

  • This is a slightly older post, but informative and useful nevertheless. Cormac posted an article on VAAI offloads and KAVG latency when observed in esxtop. The summary of the article is that the commands esxtop is tracking are internal to the ESXi kernel only; therefore, abnormal KAVG values do not represent any sort of problem. (Note there’s also an associated VMware KB article.)
  • More good information from Cormac here on the use of the SunRPC.MaxConnPerIP advanced setting and its impact on NFS mounts and NFS connections.
  • Another slightly older article (from September 2012) is this one from Frank Denneman on how vSphere 5.1 handles parallel Storage vMotion operations.
  • A fellow IT pro contacted me on Twitter to see if I had any idea why some shares on his Windows Server VM weren’t working. As it turns out, the problem is related to hotplug functionality; the OS sees the second drive as “removable” due to hotplug functionality, and therefore shares don’t work. The problem is outlined in a bit more detail here.
  • William Lam outlines how to use new tagging functionality in esxcli in vSphere 5.1 for more comprehensive scripted configurations. The new tagging functionality—if I’m reading William’s write-up correctly—means that you can configure VMkernel interfaces for any of the supported traffic types via esxcli. Neat.
  • Chris Wahl has a nice write-up on the behavior of Network I/O Control with multi-NIC vMotion traffic. It was pointed out in the comments that the behavior Chris describes is documented, but the write-up is still handy, and an important factor to keep in mind in your designs.

I suppose I should end it here, before this “short take” turns into a “long take”! In any case, courteous comments are always welcome, so if you have additional information, clarifications, or corrections to share regarding any of the articles or links in this post, feel free to speak up below.


I like to spend time examining the areas where different groups of technologies intersect. Personally, I find this activity fascinating, and perhaps that’s the reason that I find myself pursuing knowledge and experience in virtualization, networking, storage, and other areas simultaneously—it’s an effort to spend more time “on the border” between various technologies.

One border, in particular, is very interesting to me: the border between virtualization and networking. Time spent thinking about the border between networking and virtualization is what has generated posts like this one, this one, or this one. Because I’m not a networking expert (yet), most of the stuff I generate is junk, but at least it keeps me entertained—and it occasionally prods the Really Smart Guys (RSGs) to post something far more intelligent than anything I can create.

Anyway, I’ve been thinking more about some of these networking-virtualization chimeras, and I thought it might be interesting to talk about them, if for no other reason than to encourage the RSGs to correct me and help everyone understand a little better.

<aside>A chimera, by the way, was a mythological fire-breathing creature that was part lion, part goat, and part serpent; more generically, the word refers to any sort of organism that has two groups of genetically distinct cells. In layman’s terms, it’s something that is a mix of two other things.</aside>

Here are some of the networking-virtualization chimeras I’ve concocted:

  • FabricPath/TRILL on the hypervisor: See this blog post for more details. It turns out, at least at first glance, that this particular combination doesn’t seem to buy us much. The push for large L2 domains that seemed to fuel FabricPath and TRILL now seems to be abating in favor of network overlays and L3 routing.

  • MPLS-in-IP on the hypervisor: I also wrote about this strange concoction here. At first, I thought I was being clever and sidestepping some issues by bringing MPLS support into the hypervisor, but in thinking more about this I realize I’m wrong. Sure, we could encapsulate VM-to-VM traffic into MPLS, then encapsulate MPLS in UDP, but how is that any better than just encapsulating VM-to-VM traffic in VXLAN? It isn’t. (Not to mention that Ivan Pepelnjak set the record straight.)

  • LISP on the hypervisor: I thought this was a really good idea; by enabling LISP on the hypervisor and essentially making the hypervisor an ITR/ETR (see here for more LISP info), inter-DC vMotion becomes a snap. Want to use a completely routed access layer? No problem. Of course, that assumes all your WAN and data center equipment are LISP-capable and enabled/configured for LISP. I’m not the only one who thought this idea was cool, either. I’m sure there are additional problems/considerations of which I’m not aware, though—networking gurus, want to chime in and educate me on what I’m missing?

  • OTV on the hypervisor: This one isn’t really very interesting, as it bears great similarity to VXLAN (both OTV and VXLAN, to my knowledge, use very similar frame formats and encapsulation schemes). Is there something else here I’m missing?

  • VXLAN on physical switches: This one is interesting, even necessary according to some experts. Enabling VXLAN VTEP (VXLAN Tunnel End Point) termination on physical switches might also address some of the odd traffic patterns that would result from the use of VXLAN (see here for a simple example). Arista Networks demonstrated this functionality at VMworld 2012 in San Francisco, so this particular networking-virtualization mashup is probably closer to reality than any of the others.

  • OpenFlow on the hypervisor: Open vSwitch (OVS) already supports OpenFlow, so you might say that this mashup already exists. It’s not unreasonable to think Nicira might port OVS to VMware vSphere, which would bring an OpenFlow-compatible virtual switch to a much larger installed base. The missing piece is, of course, an OpenFlow controller. While an interesting mental exercise, I’m keenly interested to know what sort of real-world problems this might help solve, and would love to hear from any OpenFlow experts out there what they think.

  • Virtualizing physical switches: No, I’m not talking about running switch software on the hypervisor (think Nexus 1000V). Instead, I’m thinking more along the lines of FlowVisor, which in effect virtualizes a switch’s control plane so that multiple “slices” of a switch can be independently controlled by an external OpenFlow controller. If you’re familiar with NetApp, think of their “vfiler” construct, or think of the Virtual Device Contexts (VDCs) in a Nexus 7000. However, I’m thinking of something more device-independent than Nexus 7000 VDCs. As more and more switches move to x86 hardware, this seems like it might be something that could really take off. Multi-tenancy support (each “virtual switch instance” being independently managed), traffic isolation, QoS, VLAN isolation…lots of possibilities exist here.
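
To illustrate the slicing idea in that last item, here’s a purely conceptual Python sketch. This is not how FlowVisor is actually implemented (FlowVisor is a transparent OpenFlow proxy, and its flowspace matching is far richer than a VLAN range), and the controller addresses are invented.

    # Conceptual sketch only: each tenant's controller sees and programs only the
    # portion of the flowspace (here, a VLAN range) assigned to its slice.
    class Slice:
        def __init__(self, name, controller, vlans):
            self.name = name
            self.controller = controller    # e.g. "tcp:10.0.0.5:6633" (made-up address)
            self.vlans = set(vlans)         # this tenant's share of the flowspace

        def owns(self, packet):
            return packet.get("vlan") in self.vlans

    slices = [
        Slice("tenant-a", "tcp:10.0.0.5:6633", vlans=range(100, 200)),
        Slice("tenant-b", "tcp:10.0.0.6:6633", vlans=range(200, 300)),
    ]

    def dispatch(packet_in):
        # A packet-in event is handed only to the controller whose slice it matches,
        # so tenants can't observe or program each other's traffic.
        for s in slices:
            if s.owns(packet_in):
                return s.controller
        return None     # unowned traffic is simply not exposed to any controller

    print(dispatch({"vlan": 150, "in_port": 1}))    # -> tenant-a's controller
    print(dispatch({"vlan": 250, "in_port": 1}))    # -> tenant-b's controller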

Are there any other groupings that are worth exploring or discussing? Any other “you got your virtualization peanut butter in my networking chocolate” combinations that might help address some of the issues in data centers today? Feel free to speak up in the comments below. Courteous comments are invited and encouraged.


Welcome to Technology Short Take #27! This is my usual collection of links, thoughts, rants, and ideas about data center-related technologies. Here’s hoping you find something useful!

Networking

  • If you’re interested in learning more about OpenFlow and software-defined networking but need to do this on a shoestring budget in your home lab, a number of guides have been written to help out. I haven’t personally used any of these guides yet, but I’m working my way in that direction. (I needed to fill in some other knowledge gaps first.) First up is Brent Salisbury’s how to build an SDN lab without needing OpenFlow hardware. Brent is creating some fantastic content that I’ve found extremely useful. His earlier post on getting started with OpenFlow and Open vSwitch tutorial lab is also quite good. Another good resource is Dan Hersey’s guide to building an SDN-based private cloud in an hour. I encourage you to have a look at these posts if you’re at all interested in any of these technologies.
  • Bruce Davie and Martin Casado (with Nicira, now part of VMware) have written a post comparing the VXLAN and STT tunneling protocols. Not surprisingly, one of the key advantages of STT that’s highlighted is its improved performance due to TSO support in NIC hardware. VXLAN, on the other hand, is seeing broader adoption across multiple vendors. There’s no mention of NVGRE (or just plain GRE).
  • Related to the bare-metal provisioning work mentioned below (see the “Servers/Hardware” section), Mirantis also detailed some of the networking work they’ve done for OpenStack in support of bare-metal nodes.

Servers/Hardware

  • Mirantis published an article discussing a framework they built for bare-metal provisioning with OpenStack that allows OpenStack to place workloads onto bare-metal nodes instead of onto a hypervisor. It’s interesting work, but unfortunately it looks like this work won’t be returned to the community (it was developed for one or more of their clients). There are also a few follow-up posts, such as this one on placement control and multi-tenancy isolation and this one on preparing images for bare metal nodes. Also see the “Networking” section above for a related post on the networking aspects involved.

Security

I don’t have anything for this area this time around, but I’ll stay alert for articles to add next time. Feel free to share something in the comments!

Cloud Computing/Cloud Management

  • I might have mentioned this before, but Ken Pepple’s OpenStack Folsom architecture post is just awesome. It’s well worth reading and reviewing in depth.
  • This OpenStack-on-Debian HOWTO is a bit older (and probably out of date), but it does give a decent overview of the components that are involved and—via the configuration—how they relate to each other. While the details for installing a current version of OpenStack are likely to be different now, you might still find this conceptually helpful.
  • These articles are a bit long in the tooth, but CSS Corp has a useful series of articles on bundling various Linux distributions for use with OpenStack: bundling CentOS, bundling CentOS with VNC, bundling Debian, and bundling OpenSUSE. It would be interesting to me to see how much of this, if any, could be automated with something like Puppet. If any enterprise Puppet experts want to give it a go, I’d be happy to publish a guest blog post for you with full details on how it’s done.
  • Much like there are some great “how to’s” on how to run an SDN lab (see the Networking section earlier), there are also some great write-ups on doing the same for OpenStack. For example, Cody Bunch published this article on running OpenStack Private Cloud on ESXi, and Brent Salisbury (there he is again!) posted an older guide to OpenStack Essex on Ubuntu on VirtualBox as well as a newer guide to OpenStack DevStack on Fusion.

Operating Systems/Applications

Storage

  • I don’t fully understand all the details involved, but this post on changes in block protocol scalability in Xen outlines what sounds like good progress in improving efficiency.
  • This article is a bit older, published at the start of October, but it talks about an interesting project (product?) by Qlogic called “Mt. Rainier.” (Stu Miniman of Wikibon has more information here as well.) Apparently, “Mt. Rainier” will allow customers to combine PCIe-based SSD storage inside servers into a “virtual SAN” (now there’s an original and not over-used term). The really interesting aspect, in my opinion, is the use of “Mt. Rainier” to create shared caches across servers. Is this the beginning of the data center fractal edge?

Virtualization

  • Big news in the QEMU world: In the QEMU 1.3 release, the QEMU-KVM and QEMU projects have been merged. Why is this important? It’s first necessary to understand the relationship between QEMU and KVM. KVM is the set of kernel modules that leverage hardware virtualization functionality inside Intel and AMD CPUs, making it possible to run unmodified guest operating systems (including closed-source operating systems like Windows). QEMU, on the other hand, is needed to emulate everything else that a VM needs: networking, storage, USB, keyboard, mouse, etc. Both KVM and QEMU are needed for a full virtualization solution. Until the 1.3 release, QEMU (without hardware acceleration via KVM) was one branch, and QEMU-KVM (with KVM hardware acceleration) was a separate branch. The QEMU 1.3 release completes an effort to merge both efforts into a single development tree.
  • The merge of QEMU and QEMU-KVM isn’t the only cool thing happening with QEMU; also included in the 1.3 release is GlusterFS integration. This integration dramatically improves GlusterFS performance by allowing QEMU’s block layer to communicate directly with the Gluster backend without going through the userspace FUSE components.
  • Erik Scholten of VMGuru.nl has posted a good hypervisor feature comparison document. It includes RHEV 3.1 in the comparison, even though RHEV 3.1 wasn’t released (was still in beta) at the time the comparison was written.
  • Speaking of RHEV: apparently RHEV 3.1 was released yesterday (Tuesday, December 4, 2012), although I haven’t been able to find any sort of official press release or announcement.
  • Debunking an argument I’ve heard quite a bit is this article by Frank Denneman on using SIOC with multiple datastores backed by a single pool of disks.
  • Need to compact a virtual hard disk in Windows 8/Windows Server 2012? Ben Armstrong shows how here.
  • I enjoyed this article by Josh Townsend on using SUSE Studio and HAProxy to create a (free) open source load balancing solution for VMware View.

That’s it for this time around; no need to overwhelm you with too much information! Besides, I have to keep a few items around for Technology Short Take #28…

As always, comments, thoughts, rants, or corrections are welcome below.


Welcome to Technology Short Take #26! As you might already know, the Technology Short Takes are my irregularly-published collections of links, articles, thoughts, and (sometimes) rants. I hope you find something useful here!

Networking

  • Chris Colotti, as part of a changed focus in his role at VMware, has been working extensively with Nicira NVP. He’s had a couple of good posts; this one is a primer on how NVP works, and this one discusses the use of the Open vSwitch (OVS) vApp. As I mentioned before in other posts, OVS is popping up in more and more places—it might be a good idea to make sure you’re familiar with it.
  • This article by Ivan Pepelnjak on VXLAN termination on physical devices is over a year old, but still very applicable—especially considering Arista Networks recently announced their 7150S switch, which sports hardware VTEP (VXLAN Tunnel End Point) support (meaning that it can terminate VXLAN segments).
  • Brad Hedlund dives into Midokura Midonet in this post on L2-L4 network virtualization. It’s a good overview (thanks Brad!) and worth reading if you want to get up to speed on what Midokura is doing. (Oh, just as an aside: note that Midokura leverages OVS in their solution. Just saying…)
  • This blog post provides more useful information from Kamau Wanguhu on VXLAN and proxy ARP. Kamau also has an interesting post on network virtualization, although—to be honest—the post is long on messaging/positioning and short on technical information. I prefer the latter instead of the former.

Servers/Hardware

  • This mention of the Dell PowerEdge M I/O Aggregator looks interesting, although I’m still not real clear on exactly what it is or how it works. I guess this first article was a tease?

Security

Nothing this time around, but I’ll stay alert for items to include in future posts!

Cloud Computing/Cloud Management

  • Want to know a bit more about how to configure VXLAN inside VCD? Rawlinson Rivera has a nice write-up that is worth reviewing.
  • Clint Kitson, an EMC vSpecialist, talks about some VCD integrity scripts he created. Looks like some pretty cool stuff—great work, Clint!
  • For the past couple of weeks I’ve been (slowly) reading Kevin Jackson’s OpenStack Cloud Computing Cookbook; it’s very useful. It’s worth a read if you want to get up to speed on OpenStack; naturally, you can get it from Amazon.

Operating Systems/Applications

  • At the intersection of cloud-based storage and configuration management, I happened to find this very interesting Puppet module designed to fetch and update files from an S3 bucket. Through this module, you could store files in S3 instead of using Puppet’s built-in file server. (By the way, this module also works with OpenStack Swift as well.)
  • One of the things I’ve complained about regarding newer versions of OS X is the “hiding” of the Unix underpinnings. Perhaps I should read this book and see if my thinking is unfounded?

Storage

  • Chris Evans takes a look at Hyper-V 3.0’s Virtual Fibre Channel feature in this write-up. From what I’ve read, it sounds like Hyper-V’s NPIV implementation is more robust than VMware’s broken and busted NPIV implementation. (If you don’t know why I say that about VMware’s implementation, ask anyone who’s tried to use it.) The real question is this: is NPIV support in a hypervisor of any value any longer?
  • Gina Minks (formerly of Dell, now with Inktank) recommended I have a look at Ceph and mentioned this post on migrating to Ceph (with a little libvirt thrown in).
  • Gluster might be another project that I need to spend some time examining; this post on using Gluster with oVirt 3.1 looks interesting. Anyone have any pointers for a Gluster beginner?
  • Mirantis has a post about some Nova Volume integration with Isilon. I’ve often said that I think scale-out platforms like Isilon (among others) are an important foundation for future storage solutions. It’s cool to see some third-party development happening to integrate Isilon and OpenStack.

Virtualization

That’s all for this time around. As always, courteous comments are welcome (encouraged, in fact!), so feel free to speak up in the comments below. I’d love to hear your feedback.


Back in early March I was invited to speak at the South Florida VMUG, and I gave this presentation on vSphere networking challenges and solutions. The idea behind the presentation was to give attendees some visibility into IEEE and IETF efforts at creating new network technologies and protocols. I’m posting it here just in case someone might find it useful or helpful.

As always, your questions, corrections, or clarifications are welcome in the comments below.


As I was reviewing my list of actions in OmniFocus this morning, I saw an action I’d added a while back to review RFC 4023. RFC 4023 is the RFC that defines MPLS-in-IP and MPLS-in-GRE encapsulations, and was written in 2005. I thought, “Let me just read this real quick.” So I did.

It’s not a terribly long RFC (only about 13 pages or so), but I’ll attempt to summarize it here. (Networking gurus, feel free to correct my summary if I get it wrong.) The basic idea behind RFC 4023 is to allow two MPLS-capable LSRs (an LSR is a label switching router) that are adjacent to each other with regard to a Label Switched Path (LSP) to communicate over an intermediate IP network that does not support MPLS. Essentially, it tunnels MPLS inside IP (or GRE).

“OK,” you say. “So what?”

Well, here’s my line of thinking:

  1. All networking experts agree that we need to move away from the massive layer 2 networks we’re building to support virtualized applications in enterprise data centers. (Case in point: see this post from Ivan on a layer 2 network being a single failure domain.)

  2. Networking experts also seem to agree that the ideal solution is IP-based at layer 3. It’s ubiquitous and well understood.

  3. However, layer 3 alone doesn’t provide the necessary isolation and multi-tenancy features that we all believe are necessary for cloud environments.

  4. Therefore, we need to provide some sort of additional isolation but also maintain layer 3 connectivity. Hence the rise of protocols such as VXLAN and NVGRE, which isolate traffic using some sort of virtual network identifier (VNI) and wrap traffic inside IP (or GRE, in the case of NVGRE).

It seems to me—and I freely admit that I could be mistaken, based on my limited knowledge of MPLS thus far—that MPLS-in-IP could accomplish the same thing: provide isolation between tenants and maintain layer 3 connectivity. Am I wrong? Why not build MPLS-in-IP endpoints (referred to in RFC 4023 as “tunnel head” and “tunnel tail”) directly into our virtualization hosts and build RFC 4023-style tunnels between them? Wouldn’t this solve the same problem that newer protocols such as VXLAN, NVGRE, and STT are attempting to solve, but with protocols that are already well-defined and understood?
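
For the curious, here’s roughly what the two RFC 4023 encapsulations look like when built by hand with scapy. The addresses, label value, and inner packet are arbitrary; this is only meant to show the framing, not a working data plane.

    # Requires scapy; the MPLS layer lives in scapy.contrib.
    from scapy.all import IP, GRE
    from scapy.contrib.mpls import MPLS

    inner = IP(src="10.1.1.1", dst="10.2.2.2")      # tenant traffic between two VMs

    # MPLS-in-IP: the label stack rides directly inside IP, protocol number 137
    mpls_in_ip = (IP(src="192.0.2.1", dst="192.0.2.2", proto=137)
                  / MPLS(label=100, s=1, ttl=64)
                  / inner)

    # MPLS-in-GRE: GRE carries the label stack with protocol type 0x8847 (MPLS unicast)
    mpls_in_gre = (IP(src="192.0.2.1", dst="192.0.2.2")
                   / GRE(proto=0x8847)
                   / MPLS(label=100, s=1, ttl=64)
                   / inner)

    mpls_in_ip.show()
    mpls_in_gre.show()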

Perhaps my understanding is incorrect. Help me understand—speak up in the comments!

