VXLAN


Welcome to Technology Short Take #33, the latest in my irregularly-published series of articles discussing various data center technology-related links, articles, rants, thoughts, and questions. I hope that you find something useful here. Enjoy!

Networking

  • Tom Nolle asks the question, “Is virtualization reality even more elusive than virtual reality?” It’s a good read; the key thing that I took away from it was that SDN, NFV, and related efforts are great, but what we really need is something that can pull all these together in a way that customers (and providers) reap the benefits.
  • What happens when multiple VXLAN logical networks are mapped to the same multicast group? Venky explains it in this post. Venky also has a great write-up on how the VTEP (VXLAN Tunnel End Point) learns and creates the forwarding table. (I've put a small sketch of this flood-and-learn behavior after this list.)
  • This post by Ranga Maddipudi shows you how to use App Firewall in conjunction with VXLAN logical networks.
  • Jason Edelman is on a roll with a couple of great blog posts. First up, Jason goes off on a rant about network virtualization, briefly hitting topics like the relationship between overlays and hardware, the role of hardware in network virtualization, the changing roles of data center professionals, and whether overlays are the next logical step in the evolution of the network. I particularly enjoyed the snippet from the post by Bill Koss. Next, Jason dives a bit deeper on the relationship between network overlays and hardware, and shares his thoughts on where it does—and doesn’t—make sense to have hardware terminating overlay tunnels.
  • Another post by Tom Nolle explores the relationship—complicated at times—between SDN, NFV, and the cloud. Given that we define the cloud (sorry to steal your phrase, Joe) as elastic, pooled resources with self-service functionality and ubiquitous access, I can see why Tom states that discussing SDN or NFV without discussing cloud is silly. On the flip side, though, I have to believe that it's possible for organizations to make a gradual shift in their computing architectures and processes, so one almost has to discuss these various components individually; trying to tie them all together makes the discussion almost impossible to manage. Thoughts?
  • If you haven’t already introduced yourself to VXLAN (one of several draft protocols used as an overlay protocol), Cisco Inferno has a reasonable write-up.
  • I know Steve Jin, and he's a really smart guy, but I must disagree with some of his statements regarding what software-defined networking is and is not and where it fits, written back in April. I talked before about the difference between network virtualization and SDN, so no need to rehash that here. Also, the two key flaws that Steve identifies—single point of failure and scalability—aren't flaws with SDN/network virtualization per se, but rather flaws in a particular implementation of those technologies, IMHO.
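
Touching on Venky's VTEP posts above: here's a minimal, purely illustrative Python sketch of VXLAN's flood-and-learn behavior, including why multiple logical networks can safely share one multicast group. All names and addresses are invented; real VTEPs do this in the hypervisor's data path, not in a Python class.

```python
class Vtep:
    """Toy model of a VXLAN Tunnel End Point's learning behavior."""

    def __init__(self, vni_to_mcast_group):
        # VNI -> multicast group used to flood BUM (broadcast, unknown
        # unicast, multicast) traffic. Several VNIs may share one group;
        # the 24-bit VNI in the header still keeps them logically separate.
        self.vni_to_mcast_group = vni_to_mcast_group
        # (VNI, inner MAC) -> remote VTEP IP, learned on decapsulation.
        self.forwarding_table = {}

    def learn(self, vni, inner_src_mac, outer_src_ip):
        # Seeing a frame arrive tells us which VTEP "owns" the source MAC.
        self.forwarding_table[(vni, inner_src_mac)] = outer_src_ip

    def next_hop(self, vni, inner_dst_mac):
        # Known unicast goes straight to the learned VTEP; anything else
        # is flooded to the VNI's multicast group.
        return self.forwarding_table.get(
            (vni, inner_dst_mac), self.vni_to_mcast_group[vni])


vtep = Vtep({5001: "239.1.1.1", 5002: "239.1.1.1"})  # two VNIs, one group
vtep.learn(5001, "00:50:56:aa:bb:cc", "10.0.0.2")
print(vtep.next_hop(5001, "00:50:56:aa:bb:cc"))  # 10.0.0.2 (learned)
print(vtep.next_hop(5002, "00:50:56:aa:bb:cc"))  # 239.1.1.1 (flooded)
```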

Servers/Hardware

  • Correction from the last Technology Short Take—I incorrectly stated that the HP Moonshot offerings were ARM-based, and therefore wouldn’t support vSphere. I was wrong. The servers (right now, at least) are running Intel Atom S1260 CPUs, which are x86-based and do offer features like Intel VT-x. Thanks to all who pointed this out, and my apologies for the error!
  • I missed this on the #vBrownBag series: designing HP Virtual Connect for vSphere 5.x.

Security

Cloud Computing/Cloud Management

  • Hyper-V as hypervisor with OpenStack Compute? Sure, see here.
  • Cody Bunch, who has been focusing quite a bit on OpenStack recently, has a nice write-up on using Razor and Chef to automate an OpenStack build. Part 1 is here; part 2 is here. Good stuff—keep it up, Cody!
  • I’ve mentioned in some of my OpenStack presentations (see SpeakerDeck or Slideshare) that a great place to start if you’re just getting started is DevStack. Here, Brent Salisbury has a nice write-up on using DevStack to install OpenStack Grizzly.

Operating Systems/Applications

  • Boxen, a tool GitHub created to manage its developers' OS X Mountain Lion laptops, looks interesting. Might be a useful tool for other environments, too.
  • If you use TextMate2 (I switched to BBEdit a little while ago after being a long-time TextMate user), you might enjoy this quick post by Colin McNamara on Puppet syntax highlighting using TextMate2.

Storage

  • Anyone have more information on Jeda Networks? They’ve been mentioned a couple of times on GigaOm (here and here), but I haven’t seen anything concrete yet. Hey, Stephen Foskett, if you’re reading: get Jeda Networks to the next Tech Field Day.
  • Tim Patterson shares some code from Luc Dekens that helps check VMFS version and block sizes using PowerCLI. This could come in quite handy in making sure you know how your datastores are configured, especially if you are in the midst of a migration or have inherited an environment from someone else.
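
Luc's code is PowerCLI, but for those more comfortable in Python, a rough pyVmomi equivalent of the same check might look like the sketch below. The hostname and credentials are placeholders, and I haven't tested this against every vSphere version, so treat it as a starting point rather than a finished script.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab convenience only
si = SmartConnect(host="vcenter.example.com", user="administrator",
                  pwd="secret", sslContext=ctx)
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.Datastore], True)

for ds in view.view:
    # Only VMFS datastores carry the vmfs property; NFS datastores don't.
    if isinstance(ds.info, vim.host.VmfsDatastoreInfo):
        print(ds.name, "VMFS", ds.info.vmfs.version,
              ds.info.vmfs.blockSizeMb, "MB blocks")

Disconnect(si)
```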

Virtualization

  • Interested in using SAML and Horizon Workspace with vCloud Director? Tom Fojta shows you how.
  • If you aren’t using vSphere Host Profiles, this write-up on the VMware SMB blog might convince you why you should and show you how to get started.
  • Michael Webster tackles the question: is now the best time to upgrade to vSphere 5.1? Read the full post to see what Michael has to say about it.
  • Duncan points out an easy error to make when working with vSphere HA heartbeat datastores in this post. Key takeaway: sometimes the fix is a lot simpler than we might think at first. (I know I’m guilty of making things more complicated than they need to be at times. Aren’t we all?)
  • Jon Benedict (aka “Captain KVM”) shares a script he wrote to help provide high availability for RHEV-M.
  • Chris Wahl has a nice write-up on using log shipping to protect your vCenter database. It’s a bit over a year old (surprised I missed it until now), and—as Chris points out—log shipping doesn’t protect the database (primary and secondary copies) against corruption. However, it’s better than nothing (which I suspect is what far too many people are using).

Other

  • If you aspire to be a writer—whether that be a blogger, author, journalist, or other—you might find this article on using the DASH method for writing to be helpful. The six tips at the end of the article are especially helpful, I think.

Time to wrap this up for now; the rest will have to wait until the next Technology Short Take. Until then, feel free to share your thoughts, questions, or rants in the comments below. Courteous comments are always welcome!


Welcome to Technology Short Take #32, the latest installment in my irregularly-published series of link collections, thoughts, rants, raves, and miscellaneous information. I try to keep the information linked to data center technologies like networking, storage, virtualization, and the like, but occasionally other items slip through. I hope you find something useful.

Networking

  • Ranga Maddipudi (@vCloudNetSec on Twitter) has put together two blog posts on vCloud Networking and Security’s App Firewall (part 1 and part 2). These two posts are detailed, hands-on, step-by-step guides to using the vCNS App firewall—good stuff if you aren’t familiar with the product or haven’t had the opportunity to really use it.
  • The sentiment behind this post isn’t unique to networking (or networking engineers), but that was the original audience so I’m including it in this section. Nick Buraglio climbs on his SDN soapbox to tell networking professionals that changes in the technology field are part of life—but then provides some specific examples of how this has happened in the past. I particularly appreciated the latter part, as it helps people relate to the fact that they have undergone notable technology transitions in the past but probably just don’t realize it. As I said, this doesn’t just apply to networking folks, but to everyone in IT. Good post, Nick.
  • Some good advice here on scaling/sizing VXLAN in VMware deployments (as well as some useful background information to help explain the advice).
  • Jason Edelman goes on a thought journey connecting some dots around network APIs, abstractions, and consumption models. I’ll let you read his post for all the details, but I do agree that it is important for the networking industry to converge on a consistent set of abstractions. Jason and I disagree that OpenStack Networking (formerly Quantum) should be the basis here; he says it shouldn’t be (not well-known in the enterprise), I say it should be (already represents work created collaboratively by multiple vendors and allows for different back-end implementations).
  • Need a reasonable introduction to OpenFlow? This post fits the bill; the author takes care to define OpenFlow as accurately and precisely as possible.
  • SDN, NFV—what’s the difference? This post does a reasonable job of explaining the differences (and the relationship) between SDN and NFV.

Servers/Hardware

  • Chris Wahl provides a quick overview of the HP Moonshot servers, HP’s new ARM-based offerings. I think that Chris may have accidentally overlooked the fact that these servers are not x86-based; therefore, a hypervisor such as vSphere is not supported. Linux distributions that offer ARM support—like Ubuntu, RHEL, and SuSE—are supported, however. The target market for this is massively parallel workloads that will benefit from having many different cores available. It will be interesting to see how the support of a “Tier 1” hardware vendor like HP affects the adoption of ARM in the enterprise.

Security

  • Ivan Pepelnjak talks about a demonstration of an attack based on VM BPDU spoofing. In vSphere 5.1, VMware addressed this potential issue with a feature called BPDU Filter. Check out how to configure BPDU Filter here.
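
For reference, BPDU Filter in vSphere 5.1 is controlled by a per-host advanced setting, Net.BlockGuestBPDU (1 means BPDUs from VMs are dropped). Below is an untested pyVmomi sketch of enabling it across hosts; the hostname and credentials are placeholders, and you should verify the option key and value type against VMware's KB for your build.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab convenience only
si = SmartConnect(host="vcenter.example.com", user="administrator",
                  pwd="secret", sslContext=ctx)
content = si.RetrieveContent()
hosts = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True)

for host in hosts.view:
    # Net.BlockGuestBPDU = 1 enables BPDU Filter (drop guest BPDUs).
    # Some builds may insist on a long rather than an int here.
    host.configManager.advancedOption.UpdateOptions(changedValue=[
        vim.option.OptionValue(key="Net.BlockGuestBPDU", value=1)])
    print("BPDU Filter enabled on", host.name)

Disconnect(si)
```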

Cloud Computing/Cloud Management

  • Check out this post for some vCloud Director and RHEL 6.x interoperability issues.
  • Nick Hardiman has a good write-up on the anatomy of an AWS CloudFormation template.
  • If you missed the OpenStack Summit in Portland, Cody Bunch has a reasonable collection of Summit summary posts here (as well as materials for his hands-on workshops here). I was also there, and I have some session live blogs available for your pleasure.
  • We’ve probably all heard the “pets vs. cattle” argument applied to virtual machines in a cloud computing environment, but Josh McKenty of Piston Cloud Computing asks whether it is now time to apply that thinking to the physical hosts as well. Considering that the IT industry still seems to be struggling with applying this line of thinking to virtual systems, I suspect it might be a while before it applies to physical servers. However, Josh’s arguments are valid, and definitely worth considering.
  • I have to give Rob Hirschfeld some credit for—as a member of the OpenStack Board—acknowledging that, in his words, “we’ve created such a love fest for OpenStack that I fear we are drinking our own kool aide.” Open, honest, transparent dealings and self-assessments are critically important for a project like OpenStack to succeed, so kudos to Rob for posting a list of some of the challenges facing the project as adoption, visibility, and development accelerate.

Operating Systems/Applications

Nothing this time around, but I’ll stay alert for items to add next time.

Storage

  • Nigel Poulton tackles the question of whether ASIC (application-specific integrated circuit) use in storage arrays elongates the engineering cycles needed to add new features. This “double-edged sword” argument is present in networking as well, but this is the first time I can recall seeing the question asked about modern storage arrays. While Nigel’s article specifically refers to the 3PAR ASIC and its relationship to “flash as cache” functionality, the broader question still stands: at what point do the drawbacks of ASICs begin to outweigh the benefits?
  • Quite some time ago I pointed readers to a post about Target Driven Zoning from Erik Smith at EMC. Erik recently announced that TDZ works after a successful test run in a lab. Awesome—here’s hoping the vendors involved will push this into the market.
  • Using iSER (iSCSI Extensions for RDMA) to accelerate iSCSI traffic seems to offer some pretty promising storage improvements (see this article), but I can’t help but feel like this is a really complex solution that may not offer a great deal of value moving forward. Is it just me?

Virtualization

  • Kevin Barrass has a blog post on the VMware Community site that shows you how to create VXLAN segments and then use Wireshark to decode and view the VXLAN traffic, all using VMware Workstation.
  • Andre Leibovici explains how Horizon View Multi-VLAN works and how to configure it.
  • Looking for a good list of virtualization and cloud podcasts? Look no further.
  • Need Visio stencils for VMware? Look no further.
  • It doesn’t look like it has changed much from previous versions, but nevertheless some people might find it useful: a “how to” on virtualization with KVM on CentOS 6.4.
  • Captain KVM (cute name, a take-off of Captain Caveman for those who didn’t catch it) has a couple of posts on maximizing 10Gb Ethernet on KVM and RHEV (the KVM post is here, the RHEV post is here). I’m not sure that I agree with his description of LACP bonds (“2 10GbE links become a single 20GbE link”), since any given flow in a LACP configuration can still only use 1 link out of the bond. It’s more accurate to say that aggregate bandwidth increases, but that’s a relatively minor nit overall. (There’s a toy illustration of this point after the list.)
  • Ben Armstrong has a write-up on how to install Hyper-V’s integration components when the VM is offline.
  • What are the differences between QuickPrep and Sysprep? Jason Boche’s got you covered.
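
To illustrate the LACP nit above: link selection is a deterministic hash over a flow's headers, so every packet of a single flow lands on the same member link. This toy Python version shows the idea; real hashing is vendor-specific and happens in hardware or the driver, so treat the hash function here as purely illustrative.

```python
import hashlib

links = ["vmnic0", "vmnic1"]  # a 2 x 10GbE LACP bond

def pick_link(src_ip, dst_ip, src_port, dst_port):
    # Hash the flow's headers and map the result onto a member link.
    key = f"{src_ip}-{dst_ip}-{src_port}-{dst_port}".encode()
    return links[hashlib.md5(key).digest()[0] % len(links)]

# The same flow always hashes to the same link (so 10Gbps max for it)...
print(pick_link("10.0.0.1", "10.0.0.2", 49152, 3260))
print(pick_link("10.0.0.1", "10.0.0.2", 49152, 3260))
# ...only *different* flows can land on the other link, which is why the
# bond's aggregate bandwidth grows but a single flow's ceiling doesn't.
print(pick_link("10.0.0.1", "10.0.0.2", 49153, 3260))
```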

I suppose that’s enough information for now. As always, courteous comments are welcome, so feel free to add your thoughts in the comments below. Thanks for reading!


Dan Wendlandt said something today in the NVP deep dive session (liveblog of the session here) that really crystallized something for me. I thought perhaps I might be the only one that was seeing a trend, but Dan’s comment leads me to believe there are others seeing this trend as well. Here’s the quote, taken from my liveblog of the session:

It is important to note, as Dan does, that a tunneling protocol alone is not network virtualization.

There’s a lot of buzz in the industry about network virtualization and network overlays, and often those terms are used interchangeably. People talk about the need for multi-tenancy and address space isolation, point to network overlays like VXLAN, NVGRE, and STT as the answer to all our problems, and in so doing they (inadvertently) conflate network overlays with network virtualization. Network overlays and network virtualization aren’t the same thing, and people that use them interchangeably probably don’t fully understand what’s involved.

As an aside: if you’re not familiar with the idea of network overlays, I’d recommend reading this, this, and this to get you started. There’s plenty more out there, but those three articles will at least prime the pump, I think.

Network overlays are great for address space isolation (for example, isolating duplicate MAC addresses, duplicate IP addresses, or duplicate VLAN IDs). As such, network overlays can be an important part of network virtualization. You need more than a network overlay, though, to have network virtualization; you also need virtualized network services (like NAT, firewalls, ACLs, QoS, routing, and the like) and you need a control plane (else how would you coordinate the various pieces within the network virtualization solution?). The overlay protocol is just one piece of the puzzle, so using “network overlay” interchangeably with “network virtualization” is incorrect.
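
To make the distinction concrete, here's a deliberately simplistic Python model of the argument above. Everything in it is invented for illustration; no real product structures things this way, and the point is only that the overlay identifier is a single field among several.

```python
from dataclasses import dataclass, field

@dataclass
class VirtualNetwork:
    overlay_vni: int                    # address-space isolation (e.g., a VXLAN VNI)
    services: list = field(default_factory=list)  # NAT, firewall, ACLs, QoS, routing
    control_plane: str = ""             # what coordinates all the pieces

    def is_network_virtualization(self):
        # An overlay alone, with no services and no control plane, isn't enough.
        return bool(self.services) and bool(self.control_plane)

print(VirtualNetwork(5001).is_network_virtualization())  # False: just an overlay
print(VirtualNetwork(5001, ["NAT", "firewall"],
                     "controller").is_network_virtualization())  # True
```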

As always, I welcome the input of those more educated/knowledgeable than I am. If you’re a networking expert (or a virtual networking expert), feel free to speak up in the comments and correct my misunderstanding or misconceptions (please disclose vendor affiliations). I’m always open to deepening my knowledge—and helping others with their understanding and knowledge along the way.


Welcome to Technology Short Take #28, the first Technology Short Take for 2013. As always, I hope that you find something useful or informative here. Enjoy!

Networking

  • Ivan Pepelnjak recently wrote a piece titled “Edge and Core OpenFlow (and why MPLS is not NAT)”. It’s an informative piece—Ivan’s stuff is always informative—but what really drew my attention was his mention of a paper by Martin Casado, Teemu Koponen, and others that calls for a combination of MPLS and OpenFlow (and an evolution of OpenFlow into “edge” and “core” versions) to build next-generation networks. I’ve downloaded the paper and intend to review it in more detail. I’d love to hear from any networking experts who’ve read the paper—what are your thoughts?
  • Speaking of Ivan…it also appears that he’s quite pleased with Microsoft’s implementation of NVGRE in Hyper-V. Sounds like some of the other vendors need to get on the ball.
  • Here’s a nice explanation of CloudStack’s physical networking architecture.
  • The first fruits of Brad Hedlund’s decision to join VMware/Nicira have shown up in this joint article by Brad, Bruce Davie, and Martin Casado describing the role of network virtualization in the software-defined data center. (It doesn’t matter how many times I say or write “software-defined data center,” it still feels like a marketing term.) This post is fairly high-level and abstract; I’m looking forward to seeing more detailed and in-depth posts in the future.
  • Art Fewell speculates that the networking industry has “lost our way” and become a “big bag of protocols” in this article. I do agree with one of the final conclusions that Fewell makes in his article: that SDN (a poorly-defined and often over-used term) is the methodology of cloud computing applied to networking. Therefore, SDN is cloud networking. That, in my humble opinion, is a more holistic and useful way of looking at SDN.
  • It appears that the vCloud Connector posts (here and here) that (apparently) incorrectly identify VXLAN as a component/prerequisite of vCloud Connector have yet to be corrected. (Hat tip to Kenneth Hui at VCE.)

Servers/Hardware

Nothing this time around, but I’ll watch for content to include in future posts.

Security

  • Here’s a link to a brief (too brief, in my opinion, but perhaps I’m just being overly critical) post on KVM virtualization security, authored by Dell TechCenter. It provides some good information on securing the libvirt communication channel.

Cloud Computing/Cloud Management

  • Long-time VMware users probably remember Mike DiPetrillo, whose website has now, unfortunately, gone offline. I mention this because I’ve had this article on RabbitMQ AMQP with vCloud Director sitting in my list of “articles to write about” for a while, but some of the images were missing and I couldn’t find a link for the article. I finally found a link to a reprinted version of the article on DZone Enterprise Integration. Perhaps the article will be of some use to someone.
  • Sam Johnston talks about reliability in the cloud with a discussion on the merits of “reliable software” (software designed for failure) vs. “unreliable software” (more traditional software not designed for failure). It’s a good article, but I found the discussion between Sam and Massimo (of VMware) equally useful.

Operating Systems/Applications

Storage

  • Want some good details on the space-efficient sparse disk format in vSphere 5.1? Andre Leibovici has you covered right here.
  • Read this article for good information from Andre on a potential timeout issue with recomposing desktops and using the View Storage Accelerator (aka content-based read cache, CBRC).
  • Apparently Cormac Hogan, aka @VMwareStorage on Twitter, hasn’t gotten the memo that “best practices” is now outlawed. He should have named this series on NFS with vSphere “NFS Recommended Practices”, but even misnamed as they are, the posts still have useful information. Check out part 1, part 2, and part 3.
  • If you’d like to get a feel for how VMware sees the future of flash storage in vSphere environments, read this.

Virtualization

  • This is a slightly older post, but informative and useful nevertheless. Cormac posted an article on VAAI offloads and KAVG latency when observed in esxtop. The summary of the article is that the commands esxtop is tracking are internal to the ESXi kernel only; therefore, abnormal KAVG values do not represent any sort of problem. (Note there’s also an associated VMware KB article.)
  • More good information from Cormac here on the use of the SunRPC.MaxConnPerIP advanced setting and its impact on NFS mounts and NFS connections.
  • Another slightly older article (from September 2012) is this one from Frank Denneman on how vSphere 5.1 handles parallel Storage vMotion operations.
  • A fellow IT pro contacted me on Twitter to see if I had any idea why some shares on his Windows Server VM weren’t working. As it turns out, the problem is related to hotplug functionality; the OS sees the second drive as “removable” due to hotplug functionality, and therefore shares don’t work. The problem is outlined in a bit more detail here.
  • William Lam outlines how to use new tagging functionality in esxcli in vSphere 5.1 for more comprehensive scripted configurations. The new tagging functionality—if I’m reading William’s write-up correctly—means that you can configure VMkernel interfaces for any of the supported traffic types via esxcli. Neat. (A quick sketch of the command follows this list.)
  • Chris Wahl has a nice write-up on the behavior of Network I/O Control with multi-NIC vMotion traffic. It was pointed out in the comments that the behavior Chris describes is documented, but the write-up is still handy, and an important factor to keep in mind in your designs.
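
Based on my reading of William's post, the new namespace looks something like the sketch below, wrapped in Python for consistency (normally you'd just run esxcli in the ESXi shell). The "VMotion" tag string is my recollection and may differ on your build; check the output of the tag get subcommand on your host for the exact supported tags.

```python
import subprocess

def tag_vmk(interface, tag):
    # e.g., tag vmk1 to carry vMotion traffic via the vSphere 5.1
    # esxcli network ip interface tag namespace.
    subprocess.check_call([
        "esxcli", "network", "ip", "interface", "tag", "add",
        "--interface-name", interface, "--tagname", tag])

tag_vmk("vmk1", "VMotion")
```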

I suppose I should end it here, before this “short take” turns into a “long take”! In any case, courteous comments are always welcome, so if you have additional information, clarifications, or corrections to share regarding any of the articles or links in this post, feel free to speak up below.


I like to spend time examining the areas where different groups of technologies intersect. Personally, I find this activity fascinating, and perhaps that’s the reason that I find myself pursuing knowledge and experience in virtualization, networking, storage, and other areas simultaneously—it’s an effort to spend more time “on the border” between various technologies.

One border, in particular, is very interesting to me: the border between virtualization and networking. Time spent thinking about the border between networking and virtualization is what has generated posts like this one, this one, or this one. Because I’m not a networking expert (yet), most of the stuff I generate is junk, but at least it keeps me entertained—and it occasionally prods the Really Smart Guys (RSGs) to post something far more intelligent than anything I can create.

Anyway, I’ve been thinking more about some of these networking-virtualization chimeras, and I thought it might be interesting to talk about them, if for no other reason than to encourage the RSGs to correct me and help everyone understand a little better.

As an aside: a chimera was a mythological fire-breathing creature that was part lion, part goat, and part serpent; more generically, the word refers to any sort of organism that has two groups of genetically distinct cells. In layman’s terms, it’s something that is a mix of two other things.

Here are some of the networking-virtualization chimeras I’ve concocted:

  • FabricPath/TRILL on the hypervisor: See this blog post for more details. It turns out, at least at first glance, that this particular combination doesn’t seem to buy us much. The push for large L2 domains that seemed to fuel FabricPath and TRILL now seems to be abating in favor of network overlays and L3 routing.

  • MPLS-in-IP on the hypervisor: I also wrote about this strange concoction here. At first, I thought I was being clever and sidestepping some issues by bringing MPLS support into the hypervisor, but in thinking more about this I realize I’m wrong. Sure, we could encapsulate VM-to-VM traffic into MPLS, then encapsulate MPLS in UDP, but how is that any better than just encapsulating VM-to-VM traffic in VXLAN? It isn’t. (Not to mention that Ivan Pepelnjak set the record straight.)

  • LISP on the hypervisor: I thought this was a really good idea; by enabling LISP on the hypervisor and essentially making the hypervisor an ITR/ETR (see here for more LISP info), inter-DC vMotion becomes a snap. Want to use a completely routed access layer? No problem. Of course, that assumes all your WAN and data center equipment are LISP-capable and enabled/configured for LISP. I’m not the only one who thought this idea was cool, either. I’m sure there are additional problems/considerations of which I’m not aware, though—networking gurus, want to chime in and educate me on what I’m missing?

  • OTV on the hypervisor: This one isn’t really very interesting, as it bears great similarity to VXLAN (both OTV and VXLAN, to my knowledge, use very similar frame formats and encapsulation schemes). Is there something else here I’m missing?

  • VXLAN on physical switches: This one is interesting, even necessary according to some experts. Enabling VXLAN VTEP (VXLAN Tunnel End Point) termination on physical switches might also address some of the odd traffic patterns that would result from the use of VXLAN (see here for a simple example). Arista Networks demonstrated this functionality at VMworld 2012 in San Francisco, so this particular networking-virtualization mashup is probably closer to reality than any of the others. (A minimal sketch of the encapsulation itself follows these bullets.)

  • OpenFlow on the hypervisor: Open vSwitch (OVS) already supports OpenFlow, so you might say that this mashup already exists. It’s not unreasonable to think Nicira might port OVS to VMware vSphere, which would bring an OpenFlow-compatible virtual switch to a much larger installed base. The missing piece is, of course, an OpenFlow controller. While an interesting mental exercise, I’m keenly interested to know what sort of real-world problems this might help solve, and would love to hear from any OpenFlow experts out there what they think.

  • Virtualizing physical switches: No, I’m not talking about running switch software on the hypervisor (think Nexus 1000V). Instead, I’m thinking more along the lines of FlowVisor, which in effect virtualizes a switch’s control plane so that multiple “slices” of a switch can be independently controlled by an external OpenFlow controller. If you’re familiar with NetApp, think of their “vfiler” construct, or think of the Virtual Device Contexts (VDCs) in a Nexus 7000. However, I’m thinking of something more device-independent than Nexus 7000 VDCs. As more and more switches move to x86 hardware, this seems like it might be something that could really take off. Multi-tenancy support (each “virtual switch instance” being independently managed), traffic isolation, QoS, VLAN isolation…lots of possibilities exist here.
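
Since several of these mashups revolve around VXLAN, here's a quick sketch of what the encapsulation itself amounts to: an 8-byte header in front of the inner Ethernet frame, carried over UDP (port 4789 per the IANA assignment, though some early implementations used 8472). This is illustrative only and skips the outer UDP/IP/Ethernet headers entirely.

```python
import struct

def vxlan_encap(inner_frame: bytes, vni: int) -> bytes:
    # VXLAN header: 8 bits of flags (0x08 = valid VNI), 24 reserved bits,
    # then the 24-bit VNI and a final reserved byte.
    header = struct.pack("!II", 0x08 << 24, vni << 8)
    return header + inner_frame  # outer UDP/IP headers added by the sender

payload = vxlan_encap(b"\x00" * 60, vni=5001)
print(len(payload), payload[:8].hex())  # 68 bytes; header starts with 08
```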

Are there any other groupings that are worth exploring or discussing? Any other “you got your virtualization peanut butter in my networking chocolate” combinations that might help address some of the issues in data centers today? Feel free to speak up in the comments below. Courteous comments are invited and encouraged.


Welcome to Technology Short Take #27! This is my usual collection of links, thoughts, rants, and ideas about data center-related technologies. Here’s hoping you find something useful!

Networking

  • If you’re interested in learning more about OpenFlow and software-defined networking but need to do this on a shoestring budget in your home lab, a number of guides have been written to help out. I haven’t personally used any of these guides yet, but I’m working my way in that direction. (I needed to fill in some other knowledge gaps first.) First up is Brent Salisbury’s how to build an SDN lab without needing OpenFlow hardware. Brent is creating some fantastic content that I’ve found extremely useful. His earlier post on getting started with OpenFlow and Open vSwitch tutorial lab is also quite good. Another good resource is Dan Hersey’s guide to building an SDN-based private cloud in an hour. I encourage you to have a look at these posts if you’re at all interested in any of these technologies.
  • Bruce Davie and Martin Casado (with Nicira, now part of VMware) have written a post comparing the VXLAN and STT tunneling protocols. Not surprisingly, one of the key advantages of STT that’s highlighted is its improved performance due to TSO support in NIC hardware. VXLAN, on the other hand, is seeing broader adoption across multiple vendors. There’s no mention of NVGRE (or just plain GRE).
  • Related to the bare-metal provisioning work (see below under “Servers/Hardware”), Mirantis also detailed the networking arrangements they built for using bare-metal nodes with OpenStack.

Servers/Hardware

  • Mirantis published an article discussing a framework they built for bare-metal provisioning with OpenStack that allows OpenStack to place workloads onto bare-metal nodes instead of onto a hypervisor. It’s interesting work, but unfortunately it looks like this work won’t be returned to the community (it was developed for one or more of their clients). There are also a few follow-up posts, such as this one on placement control and multi-tenancy isolation and this one on preparing images for bare metal nodes. Also see the “Networking” section above for a related post on the networking aspects involved.

Security

I don’t have anything for this area this time around, but I’ll stay alert for articles to add next time. Feel free to share something in the comments!

Cloud Computing/Cloud Management

  • I might have mentioned this before, but Ken Pepple’s OpenStack Folsom architecture post is just awesome. It’s well worth reading and reviewing in depth.
  • This OpenStack-on-Debian HOWTO is a bit older (and probably out of date), but it does give a decent overview of the components that are involved and—via the configuration—how they relate to each other. While the details for installing a current version of OpenStack are likely to be different now, you might still find this conceptually helpful.
  • These articles are a bit long in the tooth, but CSS Corp has a useful series of articles on bundling various Linux distributions for use with OpenStack: bundling CentOS, bundling CentOS with VNC, bundling Debian, and bundling OpenSUSE. It would be interesting to me to see how much of this, if any, could be automated with something like Puppet. If any enterprise Puppet experts want to give it a go, I’d be happy to publish a guest blog post for you with full details on how it’s done.
  • Much like there are some great “how to’s” on how to run an SDN lab (see the Networking section earlier), there are also some great write-ups on doing the same for OpenStack. For example, Cody Bunch published this article on running OpenStack Private Cloud on ESXi, and Brent Salisbury (there he is again!) posted an older guide to OpenStack Essex on Ubuntu on VirtualBox as well as a newer guide to OpenStack DevStack on Fusion.

Operating Systems/Applications

Storage

  • I don’t fully understand all the details involved, but this post on changes in block protocol scalability in Xen outlines what sounds like good progress in improving efficiency.
  • This article is a bit older, published at the start of October, but it talks about an interesting project (product?) by Qlogic called “Mt. Rainier.” (Stu Miniman of Wikibon has more information here as well.) Apparently, “Mt. Rainier” will allow customers to combine PCIe-based SSD storage inside servers into a “virtual SAN” (now there’s an original and not over-used term). The really interesting aspect, in my opinion, is the use of “Mt. Rainier” to create shared caches across servers. Is this the beginning of the data center fractal edge?

Virtualization

  • Big news in the QEMU world: In the QEMU 1.3 release, the QEMU-KVM and QEMU projects have been merged. Why is this important? It’s first necessary to understand the relationship between QEMU and KVM. KVM is the set of kernel modules that leverage hardware virtualization functionality inside Intel and AMD CPUs, and it makes possible the virtualization of closed-source operating systems like Windows. QEMU, on the other hand, is needed to emulate everything else that a VM needs: networking, storage, USB, keyboard, mouse, etc. Both KVM and QEMU are needed for a full virtualization solution. Until the 1.3 release, QEMU (without hardware acceleration via KVM) was one branch, and QEMU-KVM (with KVM hardware acceleration) was a separate branch. The QEMU 1.3 release completes an effort to merge both efforts into a single development tree. (A minimal launch example follows this list.)
  • The merge of QEMU and QEMU-KVM isn’t the only cool thing happening with QEMU; also included in the 1.3 release is GlusterFS integration. This integration dramatically improves GlusterFS performance by allowing QEMU’s block layer to communicate directly with the Gluster backend without going through the userspace FUSE components.
  • Erik Scholten of VMGuru.nl has posted a good hypervisor feature comparison document. It includes RHEV 3.1 in the comparison, even though RHEV 3.1 wasn’t released (was still in beta) at the time the comparison was written.
  • Speaking of RHEV: apparently RHEV 3.1 was released yesterday (Wednesday, December 4, 2012), although I haven’t been able to find any sort of official press release or announcement.
  • Debunking an argument I’ve heard quite a bit is this article by Frank Denneman on using SIOC with multiple datastores backed by a single pool of disks.
  • Need to compact a virtual hard disk in Windows 8/Windows Server 2012? Ben Armstrong shows how here.
  • I enjoyed this article by Josh Townsend on using SUSE Studio and HAProxy to create a (free) open source load balancing solution for VMware View.
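
Tying the two QEMU items above together, the division of labor is visible right on the command line: -enable-kvm asks QEMU to use the in-kernel KVM modules for CPU/memory virtualization, while QEMU itself emulates the devices, and the gluster:// drive spec uses the new 1.3 GlusterFS block integration. This is a minimal, illustrative invocation; the host, volume, and image names are all placeholders.

```python
import subprocess

subprocess.run([
    "qemu-system-x86_64",
    "-enable-kvm",                  # CPU/memory virtualization via KVM
    "-m", "2048",                   # 2 GB of guest RAM
    # QEMU 1.3's block layer can talk to Gluster directly, no FUSE:
    "-drive", "file=gluster://gluster01/vmvol/guest.qcow2,if=virtio",
    # ...while QEMU emulates the devices, e.g. a virtio NIC:
    "-netdev", "user,id=net0",
    "-device", "virtio-net-pci,netdev=net0",
])
```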

That’s it for this time around; no need to overwhelm you with too much information! Besides, I have to keep a few items around for Technology Short Take #28…

As always, comments, thoughts, rants, or corrections are welcome below.


Welcome to Technology Short Take #26! As you might already know, the Technology Short Takes are my irregularly-published collections of links, articles, thoughts, and (sometimes) rants. I hope you find something useful here!

Networking

  • Chris Colotti, as part of a changed focus in his role at VMware, has been working extensively with Nicira NVP. He’s had a couple of good posts; this one is a primer on how NVP works, and this one discusses the use of the Open vSwitch (OVS) vApp. As I mentioned before in other posts, OVS is popping up in more and more places—it might be a good idea to make sure you’re familiar with it.
  • This article by Ivan Pepelnjak on VXLAN termination on physical devices is over a year old, but still very applicable—especially considering Arista Networks recently announced their 7150S switch, which sports hardware VTEP (VXLAN Tunnel End Point) support (meaning that it can terminate VXLAN segments).
  • Brad Hedlund dives into Midokura Midonet in this post on L2-L4 network virtualization. It’s a good overview (thanks Brad!) and worth reading if you want to get up to speed on what Midokura is doing. (Oh, just as an aside: note that Midokura leverages OVS in their solution. Just saying…)
  • This blog post provides more useful information from Kamau Wanguhu on VXLAN and proxy ARP. Kamau also has an interesting post on network virtualization, although—to be honest—the post is long on messaging/positioning and short on technical information. I prefer the latter instead of the former.

Servers/Hardware

  • This mention of the Dell PowerEdge M I/O Aggregator looks interesting, although I’m still not real clear on exactly what it is or how it works. I guess this first article was a tease?

Security

Nothing this time around, but I’ll stay alert for items to include in future posts!

Cloud Computing/Cloud Management

  • Want to know a bit more about how to configure VXLAN inside VCD? Rawlinson Rivera has a nice write-up that is worth reviewing.
  • Clint Kitson, an EMC vSpecialist, talks about some VCD integrity scripts he created. Looks like some pretty cool stuff—great work, Clint!
  • For the past couple of weeks I’ve been (slowly) reading Kevin Jackson’s OpenStack Cloud Computing Cookbook, and it’s very useful; it’s worth a read if you want to get up to speed on OpenStack. Naturally, you can get it from Amazon.

Operating Systems/Applications

  • At the intersection of cloud-based storage and configuration management, I happened to find this very interesting Puppet module designed to fetch and update files from an S3 bucket. Through this module, you could store files in S3 instead of using Puppet’s built-in file server. (By the way, this module also works with OpenStack Swift as well.)
  • One of the things I’ve complained about regarding newer versions of OS X is the “hiding” of the Unix underpinnings. Perhaps I should read this book and see if my thinking is unfounded?

Storage

  • Chris Evans takes a look at Hyper-V 3.0’s Virtual Fibre Channel feature in this write-up. From what I’ve read, it sounds like Hyper-V’s NPIV implementation is more robust than VMware’s broken and busted NPIV implementation. (If you don’t know why I say that about VMware’s implementation, ask anyone who’s tried to use it.) The real question is this: is NPIV support in a hypervisor of any value any longer?
  • Gina Minks (formerly of Dell, now with Inktank) recommended I have a look at Ceph and mentioned this post on migrating to Ceph (with a little libvirt thrown in).
  • Gluster might be another project that I need to spend some time examining; this post on using Gluster with oVirt 3.1 looks interesting. Anyone have any pointers for a Gluster beginner?
  • Mirantis has a post about some Nova Volume integration with Isilon. I’ve often said that I think scale-out platforms like Isilon (among others) are an important foundation for future storage solutions. It’s cool to see some third-party development happening to integrate Isilon and OpenStack.

Virtualization

That’s all for this time around. As always, courteous comments are welcome (encouraged, in fact!), so feel free to speak up in the comments below. I’d love to hear your feedback.


Back in early March I was invited to speak at the South Florida VMUG, and I gave this presentation on vSphere networking challenges and solutions. The idea behind the presentation was to give attendees some visibility into IEEE and IETF efforts at creating new network technologies and protocols. I’m posting it here just in case someone might find it useful or helpful.

As always, your questions, corrections, or clarifications are welcome in the comments below.


As I was reviewing my list of actions in OmniFocus this morning, I saw an action I’d added a while back to review RFC 4023. RFC 4023 is the RFC that defines MPLS-in-IP and MPLS-in-GRE encapsulations, and was written in 2005. I thought, “Let me just read this real quick.” So I did.

It’s not a terribly long RFC (only about 13 pages or so), but I’ll attempt to summarize it here. (Networking gurus, feel free to correct my summary if I get it wrong.) The basic idea behind RFC 4023 is to allow two MPLS-capable LSRs (an LSR is a label switching router) that are adjacent to each other with regard to a Label Switched Path (LSP) to communicate over an intermediate IP network that does not support MPLS. Essentially, it tunnels MPLS inside IP (or GRE).

“OK,” you say. “So what?”

Well, here’s my line of thinking:

  1. All networking experts agree that we need to move away from the massive layer 2 networks we’re building to support virtualized applications in enterprise data centers. (Case in point: see this post from Ivan on a layer 2 network being a single failure domain.)

  2. Networking experts also seem to agree that the ideal solution is IP-based at layer 3. It’s ubiquitous and well understood.

  3. However, layer 3 alone doesn’t provide the necessary isolation and multi-tenancy features that we all believe are necessary for cloud environments.

  4. Therefore, we need to provide some sort of additional isolation while maintaining layer 3 connectivity. Hence the rise of protocols such as VXLAN and NVGRE, which isolate traffic using some sort of virtual network identifier (VNI) and wrap traffic inside IP (or GRE, in the case of NVGRE).

It seems to me—and I freely admit that I could be mistaken, based on my limited knowledge of MPLS thus far—that MPLS-in-IP could accomplish the same thing: provide isolation between tenants and maintain layer 3 connectivity. Am I wrong? Why not build MPLS-in-IP endpoints (referred to in RFC 4023 as “tunnel head” and “tunnel tail”) directly into our virtualization hosts and build RFC 4023-style tunnels between them? Wouldn’t this solve the same problem that newer protocols such as VXLAN, NVGRE, and STT are attempting to solve, but with protocols that are already well-defined and understood?
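
For the curious, here's roughly what the RFC 4023 framing boils down to, sketched in Python. A label stack entry is 32 bits (label 20, EXP 3, bottom-of-stack 1, TTL 8, per RFC 3032), and the resulting MPLS packet rides directly inside an IP packet whose protocol number is 137. The tunnel head/tail logic itself is omitted; this only shows the bits.

```python
import struct

MPLS_IN_IP_PROTO = 137  # IANA IP protocol number per RFC 4023

def label_stack_entry(label, exp=0, bottom_of_stack=True, ttl=64):
    # 20-bit label | 3-bit EXP | 1-bit S (bottom of stack) | 8-bit TTL
    value = (label << 12) | (exp << 9) | (int(bottom_of_stack) << 8) | ttl
    return struct.pack("!I", value)

# One label (acting like a tenant/VNI identifier) in front of the payload,
# carried as IP protocol 137: conceptually similar to VXLAN's VNI-in-UDP.
packet = label_stack_entry(100042) + b"...inner packet..."
print(packet[:4].hex(), "inside an IP packet with protocol", MPLS_IN_IP_PROTO)
```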

Perhaps my understanding is incorrect. Help me understand—speak up in the comments!


Welcome to Technology Short Take #23, another collection of links and thoughts related to data center technologies like networking, storage, security, cloud computing, and virtualization. As usual, we have a fairly wide-ranging collection of items this time around. Enjoy!

Networking

  • A couple of days ago I learned that there are a couple open source implementations of LISP (Locator/ID Separation Protocol). There’s OpenLISP, which runs on FreeBSD, and there’s also a project called LISPmob that brings LISP to Linux. From what I can tell, LISPmob appears to be a bit more focused on the endpoint than OpenLISP.
  • In an earlier post on STT, I mentioned that STT’s re-use of the TCP header structure could cause problems with intermediate devices. It looks like someone has figured out how to allow STT through a Cisco ASA firewall; the configuration is here.
  • Jose Barreto posted a nice breakdown of SMB Multichannel, a bandwidth-enhancing feature of SMB 3.0 that will be included in Windows Server 2012. It is, as you’d expect, only supported between two SMB 3.0-capable endpoints (which, at this time, means two Windows Server 2012 hosts). Hopefully additional vendors will adopt SMB 3.0 as a network storage protocol. Just don’t call it CIFS!
  • Reading this article, you might deduce that Ivan really likes overlay/tunneling protocols. I am, of course, far from a networking expert, but I do have to ask: at what point does it become necessary (if ever) to move some of the intelligence “deeper” into the stack? Networking experts everywhere advocate the “complex edge-simple core” design, but does it ever make sense to move certain parts of the edge’s complexity into the core? Do we hamper innovation by insisting that the core always remain simple? As I said, I’m not an expert, so perhaps these are stupid questions.
  • Massimo Re Ferre posted a good article on a typical VXLAN use case. Read this if you’re looking for a more concrete example of how VXLAN could be used in a typical enterprise data center.
  • Bruce Davie of Nicira helps explain the difference between VPNs and network virtualization; this is a nice companion article to his colleague’s post (which Bruce helped to author) on the difference between network virtualization and software-defined networking (SDN).
  • The folks at Nicira also collaborated on this post regarding software overhead of tunneling. The results clearly favor STT (which was designed to take advantage of NIC offloading) over GRE, but the authors do admit that as “GRE awareness” is added to the cards, that protocol’s performance will improve.
  • Oh, and while we’re on the topic of SDN…you might have noticed that VMware has taken to using the term “software-defined” to describe many of the services that vSphere (and related products) provide. This includes the use of software-defined networking (SDN) to describe the functionality of vSwitches, distributed vSwitches, vShield, and other features. Personally, I think that the term software-based networking (SBN) is far more applicable than SDN to what VMware does. Is it just me?
  • Brad Hedlund wrote this post a few months ago, but I’m just now getting around to commenting about it. The gist of the article—forgive me if I munge it too much, Brad—is that the use of open source software components might dramatically change the shape/way/means in which networking protocols and standards are created and utilized. If two components are communicating over the network via open source components, is some sort of networking standard needed to avoid being “proprietary”? It’s an interesting thought, and goes to show the power of open source on the IT industry. Great post, Brad.
  • One more mention of OpenFlow/SDN: it’s great technology (and I’m excited about the possibilities that it creates), but it’s not a silver bullet for scalability.

Security

  • I came across this interesting post on a security attack based on VMDKs. It’s quite an interesting read, even if the probability of being able to actually leverage this attack vector is fairly low (as I understand it).

Storage

  • Chris Wahl has a good series on NFS with VMware vSphere. You can catch the start of the series here. One comment on the testing he performs in the “Same Subnet” article: if I’m not mistaken, I believe the VMkernel selection is based upon which VMkernel interface is listed in the first routing table entry for the subnet. This is something about which I wrote back in 2008, but I’m glad to see Chris bringing it to light again. (A toy illustration of that selection behavior follows this list.)
  • George Crump published this article on using DCB to enhance iSCSI. (Note: The article is quite favorable to Dell, and George discloses an affiliation with Dell at the end of the article.) One thing I did want to point out is that—if I recall correctly—the 802.1Qbb standard for Priority Flow Control only defines a single “no drop” class of service (CoS). Normally that CoS is assigned to FCoE traffic, but in an environment without FCoE you could assign it to iSCSI. In an environment with both, that could be a potential problem, as I see it. Feel free to correct me in the comments if my understanding is incorrect.
  • Microsoft is introducing data deduplication in Windows Server 2012, and here is a good post providing an introduction to Microsoft’s deduplication implementation.
  • SANRAD VXL looks interesting—anyone have any experience with it? Or more detailed technical information?
  • I really enjoyed Scott Drummonds’ recent storage performance analysis post. He goes pretty deep into some storage concepts and provides real-world, relevant information and recommendations. Good stuff.
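
As a footnote to the NFS item above, here's a toy illustration of the VMkernel selection behavior Chris and I describe: with multiple VMkernel ports on the same subnet, the first matching routing-table entry wins, and the other interface sits idle for that traffic. The routing table below is entirely invented.

```python
import ipaddress

routes = [
    ("10.0.50.0/24", "vmk1"),   # first match wins for the NFS subnet
    ("10.0.50.0/24", "vmk2"),   # never selected for this subnet
    ("0.0.0.0/0",    "vmk0"),   # default route
]

def pick_vmkernel(dst_ip):
    # Walk the table in order and return the first matching interface.
    for prefix, vmk in routes:
        if ipaddress.ip_address(dst_ip) in ipaddress.ip_network(prefix):
            return vmk

print(pick_vmkernel("10.0.50.10"))  # vmk1, no matter how vmk2 is configured
```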

Cloud Computing/Cloud Management

  • After moving CloudStack to the Apache Software Foundation, Citrix published this discourse on “open washing” and provides a set of questions to determine the “openness” of software projects with which you may become involved. While the article is clearly structured to favor Citrix and CloudStack, the underlying point—to understand exactly what “open source” means to your vendors—is valid and worth consideration.
  • Per the AWS blog, you can now export EC2 instances out of Amazon and into another environment, including VMware, Hyper-V, and Xen environments. I guess this kind of puts a dent in the whole “Hotel California” marketing play that some vendors have been using to describe Amazon.
  • Unless you’ve been hiding under a rock for the past few weeks, you’ve most likely heard about Nick Weaver’s Razor project. (If you haven’t heard about it, here’s Nick’s blog post on it.) To help with the adoption/use of Razor, Nick also recently announced an overview of the Razor API.

Virtualization

  • Frank Denneman continues to do a great job writing solid technical articles. The latest article to catch my eye (and I’m sure that I missed some) was this post on combining affinity rule types.
  • This is an interesting post on a vSphere 5 networking bug affecting iSCSI that was fixed in vSphere 5.0 Update 1.
  • Make a note of this VMware KB article regarding UDP traffic on Linux guests using VMXNET3; the workaround today is using E1000 instead.
  • This post is actually over a year old, but I just came across it: Luc Dekens posted a PowerCLI script that allows a user to find the maximum IOPS values over the last 5 minutes for a number of VMs. That’s handy. (BTW, I have fixed the error that kept me from seeing the post when it was first published—I’ve now subscribed to Luc’s blog.)
  • Want to use a Debian server to provide NFS for your VMware environment? Here is some information that might prove helpful.
  • Jeremy Waldrop of Varrow provides some information on creating a custom installation ISO for ESXi 5, Nexus 1000V, and PowerPath/VE. Cool!
  • Cormac Hogan continues to pump out some very useful storage-focused articles on the official VMware vSphere blog. For example, both the VMFS locking article and the article on extending an EagerZeroedThick disk were great posts. I sincerely hope that Cormac keeps up the great work.
  • Thanks to this Project Kronos page, I’ve been able to successfully set up XCP on Ubuntu Server 12.04 LTS. Here’s hoping it gets easier in future releases.
  • Chris Colotti takes on some vCloud Director “challenges”, mostly surrounding vShield Edge and vCloud Director’s reliance on vShield Edge for specific networking configurations. While I do agree with many of Chris’ points, I personally would disagree that using vSphere HA to protect vShield Edge is an acceptable configuration. I was also unable to find any articles that describe how to use vSphere FT to protect the deployed vShield appliances. Can anyone point out one or more of those articles? (Put them in the comments.)
  • Want to use Puppet to automate the deployment of vCenter Server? See here.

I guess it’s time to wrap up now, lest my “short take” get even longer than it already is! Thanks for reading this far, and I hope that I’ve shared something useful with you. Feel free to speak up in the comments if you have questions, thoughts, or clarifications.

