Welcome to Technology Short Take #36. In this episode, I’ll share a variety of links from around the web, along with some random thoughts and ideas along the way. I try to keep things related to the key technology areas you’ll see in today’s data centers, though I do stray from time to time. In any case, enough with the introduction—bring on the content! I hope you find something useful.
- This post is a bit older, but still useful if you’re interested in learning more about OpenFlow and OpenFlow controllers. Nick Buraglio has put together a basic reference OpenFlow controller VM—a CentOS 6.3 KVM guest running the open source Floodlight controller.
- Paul Fries takes on defining SDN, breaking it down into two “flavors”: host dominant and network dominant. This is a reasonable way of grouping the various approaches to SDN (using SDN in the very loose industry sense, not the original control plane-data plane separation sense). I’d like to add to Paul’s analysis that it’s important to understand that, in reality, host dominant and network dominant systems can coexist. It’s not at all unreasonable to think that you might have a fabric controller that is responsible for managing/optimizing traffic flows across the physical transport network/fabric, and an overlay controller—like VMware NSX—that integrates tightly with the hypervisor(s) and workloads running on those hypervisors to create and manage logical connectivity and logical network services.
- This is an older post from April 2013, but still useful, I think. In his article titled “OpenFlow Test Deployment Options“, Brent Salisbury—a rock star among the new breed of network engineers emerging in the world of SDN—discusses some practical strategies for deploying OpenFlow into an existing network topology. One key statement that I really liked from this article was this one: “SDN does not represent the end of networking as we know it. More than ever, talented operators, engineers and architects will be required to shape the future of networking.” New technologies don’t make talented folks who embrace change obsolete; if anything, these new technologies make them more valuable.
- Great post by Ivan (is there a post by Ivan that isn’t great?) on flow table explosion with OpenFlow. He does a great job of explaining how OpenFlow works and why OpenFlow 1.3 is needed in order to see broader adoption of OpenFlow.
- Intel announced the E5 2600 v2 series of CPUs back at Intel Developer Forum (IDF) 2013 (you can follow my IDF 2013 coverage by looking at posts with the IDF2013 tag). Kevin Houston followed up on that announcement with a useful post on vSphere compatibility with the E5 2600 v2. You can also get more details on the E5 2600 v2 itself in this related post by Kevin as well. (Although I’m just now catching Kevin’s posts, they were published almost immediately after the Intel announcements—thanks for the promptness, Kevin!)
Nothing this time around, but I’ll keep my eyes peeled for content to share with you in future posts.
Cloud Computing/Cloud Management
- I found this refresher on some of the most useful apt-get/apt-cache commands to be helpful. I don’t use some of them on a regular basis, and so it’s hard to remember the specific command and/or syntax when you do need one of these commands.
- I wouldn’t have initially considered comparing Docker and Chef, but perhaps that’s just my limited understanding of the two technologies. In any case, this post on why Docker and why not Chef does a good job of looking at ways that Docker could potentially replace certain uses of Chef. I tend to lean toward the author’s final conclusion that it is entirely possible we’ll see Docker and Chef being used together. (Since I’m no expert in either technology, I reserve the right to revise my view in the future.)
- Using Dell EqualLogic with VMFS? Better read this heads-up from Cormac Hogan and take the recommended action right away.
- Erwin van Londen proposes some ideas for enhancing FC error detection and notification, with the idea of making hosts more aware of path errors and able to “route” around them. It’s interesting stuff; as Erwin points out, though, even if the T11 committee accepted the proposal, it would be a while before this capability showed up in actual products.
- Libguestfs is an interesting project, and in the 1.24 release they added a tool called virt-builder that helps quickly and easily deploy VM images.
- Andre Leibovici is well-known for his insightful and informative coverage of Horizon View and related products/technologies, and rightfully so. As proof of that, I recently came across two articles by Andre, one on why CBRC is so important for Horizon View and VSAN, and a second on how VSAN helps Horizon View. Both of these are definitely worth reading if you want a bit more detail on how one of VMware’s newest technologies, VSAN, is going to impact the end-user computing (EUC) space.
- Speaking of VSAN, William Lam has a very useful article on the additional steps that are required to completely disable VSAN on an ESXi host. I say this is useful because I anticipate many folks will want to try out VSAN in their labs first; when the time comes to move it out of the lab and into production after the appropriate amount of testing and validation for your environment, this post will be quite helpful.
- It’s kind of funny: I was doing a bit of reading on ZeroVM, a new open source lightweight hypervisor, trying to understand where it fits in the overall virtualization space. The very next day, I see the announcement that ZeroVM is being acquired by Rackspace. Anyone want to take guesses on when we’ll see ZeroVM support in OpenStack?
- Nice write-up by Gabrie Van Zanten on a potential connection issue with the VCSA in vSphere 5.5.
- See this post by Ben Armstrong on faster live migration in Hyper-V on Windows Server 2012 R2 (which is now generally available).
That’s it for this time around, but feel free to continue the conversation in the comments below. If you have any additional information to share regarding any of the topics I’ve mentioned, please take the time to add that information in the comments. Courteous comments are always welcome!
Tags: Automation, CLI, Hardware, HyperV, Linux, Networking, OpenFlow, OpenStack, Puppet, Security, Storage, Virtualization, VMware
Welcome to Technology Short Take #34, my latest collection of links, articles, thoughts, and ideas from around the web, centered on key data center technologies. Enjoy!
- Henry Louwers has a nice write-up on some of the design considerations that go into selecting a Citrix NetScaler solution.
- Scott Hogg explores jumbo frames and their benefits/drawbacks in a clear and concise manner. It’s worth reading if you aren’t familiar with jumbo frames and some of the considerations around their use.
- The networking “old guard” likes to talk about how x86 servers and virtualization create network bottlenecks due to performance concerns, but as Ivan points out in this post, it’s rapidly becoming—or has already become—a non-issue. (By the way, if you’re not already reading all of Ivan’s content, you need to be. Seriously.)
- Greg Ferro, aka EtherealMind, has a great series of articles on overlay networking (a component technology used in a number of network virtualization solutions). Greg starts out with a quick look at the value prop for overlay networking. In addition to highlighting one key value of overlay networking—that decoupling the logical network from the physical network enables more rapid change and innovation—Greg also establishes that overlay networking is not new. Greg continues with a more detailed look at how overlay networking works. Finally, Greg takes a look at whether overlay networking and the physical network should be integrated; he arrives at the conclusion that integrating the two is likely to be unsuccessful given the history of such attempts in the past.
- Terry Slattery ruminates on the power of creating (and using) the right abstraction in networking. The value of the “right abstraction” has come up a number of times; it was a featured discussion point of Martin Casado’s talk at the OpenStack Summit in Portland in April, and takes center stage in a recent post over at Network Heresy.
- Here’s a decent two-part series about running Vyatta on VMware Workstation (part 1 and part 2).
- Could we use OpenFlow to build better internet exchanges? Here’s one idea.
I have nothing to share this time around, but I’ll keep watch for content to include in future Technology Short Takes.
Cloud Computing/Cloud Management
- Tom Fojta takes a look at integrating vCloud Automation Center (vCAC) with vCloud Director in this post. (By the way, congrats to Tom on becoming the first VCDX-Cloud!)
- In case you missed it, here’s the recording for the #vBrownBag session with Jon Harris on vCAC. (I had the opportunity to hear Jon speak about his employer’s vCAC deployment and some of the lessons learned at a recent New Mexico VMUG meeting.)
- Rawlinson Rivera starts to address a lack of available information about Virsto in the first of a series of posts on VMware Virsto. This initial post provides an introduction to Virsto; future posts will provide more in-depth technical details (which is what I’m really looking forward to getting).
- Nigel Poulton talks a bit about target driven zoning, something I’ve mentioned before on this site. For more information on target driven zoning (also referred to as peer zoning), also be sure to check out Erik Smith’s blog.
- Now that he’s had some time to come up to speed in his new role, Frank Denneman has started a great series on the basic elements of PernixData’s Flash Virtualization Platform (FVP). You can read part 1 here and part 2 here. I’m looking forward to future parts in this series.
- I’d often wondered this myself, and now Cormac Hogan has the answer: why is uploading files to VMFS so slow? Good information.
It’s time to wrap up now, or this Technology Short Take is going to turn into a Technology Long Take. Anyway, I hope you found something useful in this little collection. If you have any feedback or suggestions for improvement for future posts, feel free to speak up in the comments below.
Tags: Automation, HyperV, Linux, Networking, OpenFlow, OpenStack, Puppet, Storage, vCloud, Virtualization, VMFS, VMware, vSphere, Xen
Welcome to Technology Short Take #33, the latest in my irregularly-published series of articles discussing various data center technology-related links, articles, rants, thoughts, and questions. I hope that you find something useful here. Enjoy!
- Tom Nolle asks the question, “Is virtualization reality even more elusive than virtual reality?” It’s a good read; the key thing I took away from it was that SDN, NFV, and related efforts are great, but what we really need is something that pulls them all together in a way that lets customers (and providers) reap the benefits.
- What happens when multiple VXLAN logical networks are mapped to the same multicast group? Venky explains it in this post. Venky also has a great write-up on how the VTEP (VXLAN Tunnel End Point) learns and creates the forwarding table.
- This post by Ranga Maddipudi shows you how to use App Firewall in conjunction with VXLAN logical networks.
- Jason Edelman is on a roll with a couple of great blog posts. First up, Jason goes off on a rant about network virtualization, briefly hitting topics like the relationship between overlays and hardware, the role of hardware in network virtualization, the changing roles of data center professionals, and whether overlays are the next logical step in the evolution of the network. I particularly enjoyed the snippet from the post by Bill Koss. Next, Jason dives a bit deeper on the relationship between network overlays and hardware, and shares his thoughts on where it does—and doesn’t—make sense to have hardware terminating overlay tunnels.
- Another post by Tom Nolle explores the relationship—complicated at times—between SDN, NFV, and the cloud. Given that we define the cloud (sorry to steal your phrase, Joe) as elastic, pooled resources with self-service functionality and ubiquitous access, I can see why Tom states that to discuss SDN or NFV without discussing cloud is silly. On the flip side, though, I have to believe that it’s possible for organizations to make a gradual shift in their computing architectures and processes, so one almost has to discuss these various components individually—trying to tackle them all at once makes the conversation almost unmanageable. Thoughts?
- If you haven’t already introduced yourself to VXLAN (one of several draft protocols used as an overlay protocol), Cisco Inferno has a reasonable write-up.
- I know Steve Jin, and he’s a really smart guy, but I must disagree with some of his statements regarding what software-defined networking is and is not and where it fits, written back in April. I’ve talked before about the difference between network virtualization and SDN, so no need to rehash that here. Also, the two key flaws that Steve identifies—single point of failure and scalability—aren’t flaws with SDN/network virtualization itself, but rather flaws in a particular implementation of those technologies, IMHO.
- Correction from the last Technology Short Take—I incorrectly stated that the HP Moonshot offerings were ARM-based, and therefore wouldn’t support vSphere. I was wrong. The servers (right now, at least) are running Intel Atom S1260 CPUs, which are x86-based and do offer features like Intel VT-x. Thanks to all who pointed this out, and my apologies for the error!
- I missed this on the #vBrownBag series: designing HP Virtual Connect for vSphere 5.x.
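Venky’s VTEP write-up mentioned above describes learning-based forwarding. As a rough illustration of the idea—my own toy model, with simplified field names, not any vendor’s actual implementation—a VTEP’s forwarding table is essentially a (VNI, inner MAC) to remote-VTEP map populated from the headers of received frames:

```python
# Toy model of VXLAN VTEP MAC learning (illustration only; simplified
# field names, not any vendor's implementation).

class Vtep:
    def __init__(self, ip):
        self.ip = ip
        # (vni, inner source MAC) -> remote VTEP IP, learned from traffic
        self.fwd_table = {}

    def receive(self, vni, inner_src_mac, outer_src_ip):
        # Learn: the inner source MAC lives behind the VTEP that sent
        # us the encapsulated frame (the outer source IP).
        self.fwd_table[(vni, inner_src_mac)] = outer_src_ip

    def lookup(self, vni, inner_dst_mac):
        # Known unicast goes directly to the learned VTEP; unknown
        # unicast is flooded (e.g., via the multicast group for the VNI).
        return self.fwd_table.get((vni, inner_dst_mac), "flood")

vtep = Vtep("10.0.0.1")
vtep.receive(5000, "aa:bb:cc:dd:ee:01", "10.0.0.2")
print(vtep.lookup(5000, "aa:bb:cc:dd:ee:01"))  # learned -> 10.0.0.2
print(vtep.lookup(5000, "aa:bb:cc:dd:ee:99"))  # unknown -> flood
```

The multicast-group question in Venky’s first post falls out of the “flood” branch: map multiple VNIs to one group and every VTEP in that group receives (and must filter) the flooded traffic.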
Cloud Computing/Cloud Management
- Hyper-V as hypervisor with OpenStack Compute? Sure, see here.
- Cody Bunch, who has been focusing quite a bit on OpenStack recently, has a nice write-up on using Razor and Chef to automate an OpenStack build. Part 1 is here; part 2 is here. Good stuff—keep it up, Cody!
- I’ve mentioned in some of my OpenStack presentations (see SpeakerDeck or Slideshare) that a great place to start if you’re just getting started is DevStack. Here, Brent Salisbury has a nice write-up on using DevStack to install OpenStack Grizzly.
- Boxen, a tool created by GitHub to manage their OS X Mountain Lion laptops for developers, looks interesting. Might be a useful tool for other environments, too.
- If you use TextMate2 (I switched to BBEdit a little while ago after being a long-time TextMate user), you might enjoy this quick post by Colin McNamara on Puppet syntax highlighting using TextMate2.
- Anyone have more information on Jeda Networks? They’ve been mentioned a couple of times on GigaOm (here and here), but I haven’t seen anything concrete yet. Hey, Stephen Foskett, if you’re reading: get Jeda Networks to the next Tech Field Day.
- Tim Patterson shares some code from Luc Dekens that helps check VMFS version and block sizes using PowerCLI. This could come in quite handy in making sure you know how your datastores are configured, especially if you are in the midst of a migration or have inherited an environment from someone else.
- Interested in using SAML and Horizon Workspace with vCloud Director? Tom Fojta shows you how.
- If you aren’t using vSphere Host Profiles, this write-up on the VMware SMB blog might convince you why you should and show you how to get started.
- Michael Webster tackles the question: is now the best time to upgrade to vSphere 5.1? Read the full post to see what Michael has to say about it.
- Duncan points out an easy error to make when working with vSphere HA heartbeat datastores in this post. Key takeaway: sometimes the fix is a lot simpler than we might think at first. (I know I’m guilty of making things more complicated than they need to be at times. Aren’t we all?)
- Jon Benedict (aka “Captain KVM”) shares a script he wrote to help provide high availability for RHEV-M.
- Chris Wahl has a nice write-up on using log shipping to protect your vCenter database. It’s a bit over a year old (surprised I missed it until now), and—as Chris points out—log shipping doesn’t protect the database (primary and secondary copies) against corruption. However, it’s better than nothing (which I suspect is what far too many people are using).
- If you aspire to be a writer—whether that be a blogger, author, journalist, or other—you might find this article on using the DASH method for writing to be helpful. The six tips at the end of the article are especially helpful, I think.
Time to wrap this up for now; the rest will have to wait until the next Technology Short Take. Until then, feel free to share your thoughts, questions, or rants in the comments below. Courteous comments are always welcome!
Tags: Automation, Hardware, HP, HyperV, Linux, Macintosh, Networking, OpenStack, Puppet, SDN, vCloud, Virtualization, vSphere, VXLAN, Writing
Welcome to Technology Short Take #29! This is another installment in my irregularly-published series of links, thoughts, rants, and raves across various data center-related fields of technology. As always, I hope you find something useful here.
- Who out there has played around with Mininet yet? Looks like this is another tool I need to add to my toolbox as I continue to explore networking technologies like OpenFlow, Open vSwitch, and others.
- William Lam has a recent post on some useful VXLAN commands found in ESXCLI with vSphere 5.1. I’m a CLI fan, so I like this sort of stuff.
- I still have a lot to learn about OpenFlow and networking, but this article from June of last year (it appears to have been written by Ivan Pepelnjak) discusses some of the potential scalability concerns around early versions of the OpenFlow protocol. In particular, the use of OpenFlow to perform granular per-flow control when there are thousands (or maybe only hundreds) of flows presents a scalability challenge (for now, at least). In my mind, this isn’t an indictment of OpenFlow, but rather an indictment of the way that OpenFlow is being used. I think that’s the point Ivan tried to make as well—it’s the architecture and how OpenFlow is used that makes a difference. (Is that a reasonable summary, Ivan?)
- Brad Hedlund (who will be my co-worker starting on 2/11) created a great explanation of network virtualization that clearly breaks down the components and explains their purpose and function. Great job, Brad.
- One of the things I like about Open vSwitch (OVS) is that it is so incredibly versatile. Case in point: here’s a post on using OVS to connect LXC containers running on different hosts via GRE tunnels. Handy!
- Cisco UCS is pretty cool in that it makes automation of compute hardware easier through such abstractions as server profiles. Now, you can also automate UCS with Chef. I traded a few tweets with some Puppet folks, and they indicated they’re looking at this as well.
- Speaking of Puppet and hardware, I also saw a mention on Twitter about a Puppet module that will manage the configuration of a NetApp filer. Does anyone have a URL with more information on that?
- Continuing the thread on configuration management systems running on non-compute hardware (I suppose this shouldn’t be under the “Servers/Hardware” section any longer!), I also found references to running CFEngine on network appliances and running Chef on Arista switches. That’s kind of cool. What kind of coolness would result from even greater integration between an SDN controller and a declarative configuration management tool? Hmmm…
- Want full-disk encryption in Ubuntu, using AES-XTS-PLAIN64? Here’s a detailed write-up on how to do it.
- In posts and talks I’ve given about personal productivity, I’ve spoken about the need to minimize “friction,” that unspoken drag that makes certain tasks or workflows more difficult and harder to adopt. Tal Klein has a great post on how friction comes into play with security as well.
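The OpenFlow scalability concern raised earlier in this list—granular per-flow control versus coarser rules—really comes down to arithmetic: per-flow state grows roughly with the number of communicating pairs, while destination-based aggregation grows only with the number of destinations. A quick back-of-the-envelope comparison (my own illustration, not from Ivan’s article):

```python
# Back-of-the-envelope flow-table sizing (my own illustration).
# If n hosts all talk to each other, fine-grained per-flow rules need an
# entry per ordered (src, dst) pair; destination-based aggregation needs
# one entry per destination.

def per_flow_entries(n_hosts):
    return n_hosts * (n_hosts - 1)  # ordered (src, dst) pairs

def per_destination_entries(n_hosts):
    return n_hosts

for n in (100, 1000):
    print(n, per_flow_entries(n), per_destination_entries(n))
# At 1,000 hosts that's 999,000 per-flow entries vs. 1,000 aggregated
# ones -- which is why granular per-flow control strains the limited
# flow tables in early OpenFlow hardware.
```

As the post argues, this is an indictment of how OpenFlow is used, not of OpenFlow itself: the architecture determines whether you need the quadratic column or the linear one.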
Cloud Computing/Cloud Management
- If you, like me, are constantly on the search for more quality information on OpenStack and its components, then you’ll probably find this post on getting Cinder up and running to be helpful. (I did, at least.)
- Mirantis—recently the recipient of $10 million in funding from various sources—posted a write-up in late November 2012 on troubleshooting some DNS and DHCP service configuration issues in OpenStack Nova. The post is a bit specific to work Mirantis did in integrating an InfoBlox appliance into OpenStack, but might be useful in other situations as well.
- I found this article on Packstack, a tool used to transform Fedora 17/18, CentOS 6, or RHEL 6 servers into a working OpenStack deployment (Folsom). It seems to me that lots of people understand that getting an OpenStack cloud up and running is a bit more difficult than it should be, and are therefore focusing efforts on making it easier.
- DevStack is another proof point of the effort going into making it easier to get OpenStack up and running, although the focus for DevStack is on single-host development environments (typically virtual themselves). Here’s one write-up on DevStack; here’s another one by Cody Bunch, and yet another one by the inimitable Brent Salisbury.
- If you’re interested in learning Puppet, there are a great many resources out there; in fact, I’ve already mentioned many of them in previous posts. I recently came across these Example42 Puppet Tutorials. I haven’t had the chance to review them myself yet, but it looks like they might be a useful resource as well.
- Speaking of Puppet, the puppet-lint tool is very handy for ensuring that your Puppet manifest syntax is correct and follows the style guidelines. The tool has recently been updated to help fix issues as well. Read here for more information.
- Greg Schulz (aka StorageIO) has a couple of VMware storage tips posts you might find useful reading. Part 1 is here, part 2 is here. Enjoy!
- Amar Kapadia suggests that adding LTFS to Swift might create an offering that could give AWS Glacier a real run for its money.
- Gluster interests me. Perhaps it shouldn’t, but it does. For example, the idea of hosting VMs on Gluster (similar to the setup described here) seems quite interesting, and the work being done to integrate KVM/QEMU with Gluster also looks promising. If I can ever get my home lab into the right shape, I’m going to do some testing with this. Anyone done anything with Gluster?
- Erik Smith has a very informative write-up on why FIP snooping is important when using FCoE.
- Via this post on ten useful OpenStack Swift features, I found this page on how to build the “Swift All in One,” a useful VM for learning all about Swift.
- There’s no GUI for it, but it’s kind of cool that you can indeed create VM anti-affinity rules in Hyper-V using PowerShell. This is another example of how Hyper-V continues to get more competent. Ignore Microsoft and Hyper-V at your own risk…
- Frank Denneman takes a quick look at using user-defined NetIOC network resource pools to isolate and protect IP-based storage traffic from within the guest (i.e., using NFS or iSCSI from within the guest OS, not through the VMkernel). Naturally, this technique could be used to “protect” or “enhance” other types of important traffic flows to/from your guest OS instances as well.
- Andre Leibovici has a brief write-up on the PowerShell module for the Nicira Network Virtualization Platform (NVP). Interesting stuff…
- This write-up by Falko Timme on using BoxGrinder to create virtual appliances for KVM was interesting. I might have to take a look at BoxGrinder and see what it’s all about.
- In case you hadn’t heard, OVF 2.0 has been announced/released by the DMTF. Winston Bumpus of VMware’s Office of the CTO has more information in this post. I also found the OVF 2.0 frequently asked questions (FAQs) to be helpful. Of course, the real question is how long it will be before vendors add support for OVF 2.0, and how extensive that support actually is.
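The Hyper-V anti-affinity item above is conceptually simple: a placement check just refuses to co-locate VMs that share an anti-affinity class. A minimal sketch of the concept (in Python, purely illustrative—this is the idea, not Hyper-V’s PowerShell interface):

```python
# Minimal sketch of VM anti-affinity placement (illustrative only; the
# concept behind the feature, not Hyper-V's actual PowerShell cmdlets).

def can_place(vm_class, host, placements):
    """Allow placement unless a VM with the same anti-affinity class
    is already running on this host. `placements` is a list of
    (anti_affinity_class, host) tuples for existing VMs."""
    return all(not (h == host and c == vm_class)
               for (c, h) in placements)

placements = [("sql", "host1"), ("web", "host1")]
print(can_place("sql", "host1", placements))  # False: sql already on host1
print(can_place("sql", "host2", placements))  # True
```

In Hyper-V the class is just a string property on the VM resource, which is why it’s scriptable even without a GUI.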
And that’s it for this time around! Feel free to share your thoughts, suggestions, clarifications, or corrections in the comments below. I encourage your feedback, and thanks for reading.
Tags: Automation, Cisco, CLI, Encryption, HyperV, KVM, Linux, Microsoft, NetApp, Networking, OpenStack, Puppet, Storage, UCS, Virtualization, VMware
Welcome to Technology Short Take #28, the first Technology Short Take for 2013. As always, I hope that you find something useful or informative here. Enjoy!
- Ivan Pepelnjak recently wrote a piece titled “Edge and Core OpenFlow (and why MPLS is not NAT)”. It’s an informative piece—Ivan’s stuff is always informative—but what really drew my attention was his mention of a paper by Martin Casado, Teemu Koponen, and others that calls for a combination of MPLS and OpenFlow (and an evolution of OpenFlow into “edge” and “core” versions) to build next-generation networks. I’ve downloaded the paper and intend to review it in more detail. I’d love to hear from any networking experts who’ve read the paper—what are your thoughts?
- Speaking of Ivan…it also appears that he’s quite pleased with Microsoft’s implementation of NVGRE in Hyper-V. Sounds like some of the other vendors need to get on the ball.
- Here’s a nice explanation of CloudStack’s physical networking architecture.
- The first fruits of Brad Hedlund’s decision to join VMware/Nicira have shown up in this joint article by Brad, Bruce Davie, and Martin Casado describing the role of network virtualization in the software-defined data center. (It doesn’t matter how many times I say or write “software-defined data center,” it still feels like a marketing term.) This post is fairly high-level and abstract; I’m looking forward to seeing more detailed and in-depth posts in the future.
- Art Fewell speculates that the networking industry has “lost our way” and become a “big bag of protocols” in this article. I do agree with one of the final conclusions that Fewell makes in his article: that SDN (a poorly-defined and often over-used term) is the methodology of cloud computing applied to networking. Therefore, SDN is cloud networking. That, in my humble opinion, is a more holistic and useful way of looking at SDN.
- It appears that the vCloud Connector posts (here and here) that (apparently) incorrectly identify VXLAN as a component/prerequisite of vCloud Connector have yet to be corrected. (Hat tip to Kenneth Hui at VCE.)
Nothing this time around, but I’ll watch for content to include in future posts.
- Here’s a link to a brief (too brief, in my opinion, but perhaps I’m just being overly critical) post on KVM virtualization security, authored by Dell TechCenter. It provides some good information on securing the libvirt communication channel.
Cloud Computing/Cloud Management
- Long-time VMware users probably remember Mike DiPetrillo, whose website has now, unfortunately, gone offline. I mention this because I’ve had this article on RabbitMQ AMQP with vCloud Director sitting in my list of “articles to write about” for a while, but some of the images were missing and I couldn’t find a link for the article. I finally found a link to a reprinted version of the article on DZone Enterprise Integration. Perhaps the article will be of some use to someone.
- Sam Johnston talks about reliability in the cloud with a discussion on the merits of “reliable software” (software designed for failure) vs. “unreliable software” (more traditional software not designed for failure). It’s a good article, but I found the discussion between Sam and Massimo (of VMware) to be equally useful.
- Want some good details on the space-efficient sparse disk format in vSphere 5.1? Andre Leibovici has you covered right here.
- Read this article for good information from Andre on a potential timeout issue when recomposing desktops and using the View Storage Accelerator (aka Content-Based Read Cache, CBRC).
- Apparently Cormac Hogan, aka @VMwareStorage on Twitter, hasn’t gotten the memo that “best practices” is now outlawed. He should have named this series on NFS with vSphere “NFS Recommended Practices”, but even misnamed as they are, the posts still have useful information. Check out part 1, part 2, and part 3.
- If you’d like to get a feel for how VMware sees the future of flash storage in vSphere environments, read this.
- This is a slightly older post, but informative and useful nevertheless. Cormac posted an article on VAAI offloads and KAVG latency when observed in esxtop. The summary of the article is that the commands esxtop is tracking are internal to the ESXi kernel only; therefore, abnormal KAVG values do not represent any sort of problem. (Note there’s also an associated VMware KB article.)
- More good information from Cormac here on the use of the SunRPC.MaxConnPerIP advanced setting and its impact on NFS mounts and NFS connections.
- Another slightly older article (from September 2012) is this one from Frank Denneman on how vSphere 5.1 handles parallel Storage vMotion operations.
- A fellow IT pro contacted me on Twitter to see if I had any idea why some shares on his Windows Server VM weren’t working. As it turns out, the problem is related to hotplug functionality: the OS sees the second drive as “removable,” and therefore shares on it don’t work. The problem is outlined in a bit more detail here.
- William Lam outlines how to use new tagging functionality in esxcli in vSphere 5.1 for more comprehensive scripted configurations. The new tagging functionality—if I’m reading William’s write-up correctly—means that you can configure VMkernel interfaces for any of the supported traffic types via esxcli.
- Chris Wahl has a nice write-up on the behavior of Network I/O Control with multi-NIC vMotion traffic. It was pointed out in the comments that the behavior Chris describes is documented, but the write-up is still handy, and an important factor to keep in mind in your designs.
I suppose I should end it here, before this “short take” turns into a “long take”! In any case, courteous comments are always welcome, so if you have additional information, clarifications, or corrections to share regarding any of the articles or links in this post, feel free to speak up below.
Tags: Automation, HyperV, KVM, Macintosh, Microsoft, Networking, NFS, Security, Storage, Virtualization, VMotion, VMware, vSphere, VXLAN
Welcome to Technology Short Take #23, another collection of links and thoughts related to data center technologies like networking, storage, security, cloud computing, and virtualization. As usual, we have a fairly wide-ranging collection of items this time around. Enjoy!
- A couple of days ago I learned that there are a couple open source implementations of LISP (Locator/ID Separation Protocol). There’s OpenLISP, which runs on FreeBSD, and there’s also a project called LISPmob that brings LISP to Linux. From what I can tell, LISPmob appears to be a bit more focused on the endpoint than OpenLISP.
- In an earlier post on STT, I mentioned that STT’s re-use of the TCP header structure could cause problems with intermediate devices. It looks like someone has figured out how to allow STT through a Cisco ASA firewall; the configuration is here.
- Jose Barreto posted a nice breakdown of SMB Multichannel, a bandwidth-enhancing feature of SMB 3.0 that will be included in Windows Server 2012. It is, not surprisingly, only supported between two SMB 3.0-capable endpoints (which, at this time, means two Windows Server 2012 hosts). Hopefully additional vendors will adopt SMB 3.0 as a network storage protocol. Just don’t call it CIFS!
- Reading this article, you might deduce that Ivan really likes overlay/tunneling protocols. I am, of course, far from a networking expert, but I do have to ask: at what point does it become necessary (if ever) to move some of the intelligence “deeper” into the stack? Networking experts everywhere advocate the “complex edge-simple core” design, but does it ever make sense to move certain parts of the edge’s complexity into the core? Do we hamper innovation by insisting that the core always remain simple? As I said, I’m not an expert, so perhaps these are stupid questions.
- Massimo Re Ferre posted a good article on a typical VXLAN use case. Read this if you’re looking for a more concrete example of how VXLAN could be used in a typical enterprise data center.
- Bruce Davie of Nicira helps explain the difference between VPNs and network virtualization; this is a nice companion article to his colleague’s post (which Bruce helped to author) on the difference between network virtualization and software-defined networking (SDN).
- The folks at Nicira also collaborated on this post regarding software overhead of tunneling. The results clearly favor STT (which was designed to take advantage of NIC offloading) over GRE, but the authors do admit that as “GRE awareness” is added to the cards that protocol’s performance will improve.
- Oh, and while we’re on the topic of SDN…you might have noticed that VMware has taken to using the term “software-defined” to describe many of the services that vSphere (and related products) provide. This includes the use of software-defined networking (SDN) to describe the functionality of vSwitches, distributed vSwitches, vShield, and other features. Personally, I think that the term software-based networking (SBN) is far more applicable than SDN to what VMware does. Is it just me?
- Brad Hedlund wrote this post a few months ago, but I’m just now getting around to commenting about it. The gist of the article—forgive me if I munge it too much, Brad—is that the use of open source software components might dramatically change the way in which networking protocols and standards are created and utilized. If two components are communicating over the network via open source components, is some sort of networking standard needed to avoid being “proprietary”? It’s an interesting thought, and goes to show the influence that open source has on the IT industry. Great post, Brad.
- One more mention of OpenFlow/SDN: it’s great technology (and I’m excited about the possibilities that it creates), but it’s not a silver bullet for scalability.
- I came across this interesting post on a security attack based on VMDKs. It’s quite an interesting read, even if the probability of being able to actually leverage this attack vector is fairly low (as I understand it).
- Chris Wahl has a good series on NFS with VMware vSphere. You can catch the start of the series here. One comment on the testing he performs in the “Same Subnet” article: if I’m not mistaken, I believe the VMkernel selection is based upon which VMkernel interface is listed in the first routing table entry for the subnet. This is something about which I wrote back in 2008, but I’m glad to see Chris bringing it to light again.
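To illustrate the behavior I’m describing, here’s a toy Python model of “first matching routing-table entry wins” for VMkernel interface selection. This is purely conceptual—the subnets and vmk names are made up, and this is not actual ESXi logic:

```python
import ipaddress

# Conceptual routing table: (subnet, VMkernel interface), in table order.
# With two vmknics on the same subnet, only the first entry is ever used.
ROUTE_TABLE = [
    ("10.0.10.0/24", "vmk1"),
    ("10.0.10.0/24", "vmk2"),   # same subnet, listed second -- never selected
    ("0.0.0.0/0",    "vmk0"),   # default route
]

def select_vmkernel(dest):
    """Return the vmk interface from the first routing entry matching dest."""
    addr = ipaddress.ip_address(dest)
    for subnet, vmk in ROUTE_TABLE:
        if addr in ipaddress.ip_network(subnet):
            return vmk
    raise LookupError("no matching route")

print(select_vmkernel("10.0.10.50"))  # vmk1 wins, even though vmk2 also matches
print(select_vmkernel("8.8.8.8"))     # falls through to the default route
```

This is why simply adding a second VMkernel port on the same subnet doesn’t, by itself, spread NFS traffic across multiple links.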
- George Crump published this article on using DCB to enhance iSCSI. (Note: The article is quite favorable to Dell, and George discloses an affiliation with Dell at the end of the article.) One thing I did want to point out is that—if I recall correctly—the 802.1Qbb standard for Priority Flow Control only defines a single “no drop” class of service (CoS). Normally that CoS is assigned to FCoE traffic, but in an environment without FCoE you could assign it to iSCSI. In an environment with both, that could be a potential problem, as I see it. Feel free to correct me in the comments if my understanding is incorrect.
- Microsoft is introducing data deduplication in Windows Server 2012, and here is a good post providing an introduction to Microsoft’s deduplication implementation.
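From what I’ve read, enabling deduplication is refreshingly simple—install the feature, enable it per volume, and let the optimization jobs run. Roughly like this (cmdlet names are from Microsoft’s documentation; I haven’t tested this myself, so verify before relying on it):

```powershell
# Install the deduplication feature, then enable it on a data volume
Add-WindowsFeature -Name FS-Data-Deduplication
Enable-DedupVolume -Volume E:

# Kick off an optimization pass and check the space savings
Start-DedupJob -Volume E: -Type Optimization
Get-DedupStatus
```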
- SANRAD VXL looks interesting—anyone have any experience with it? Or more detailed technical information?
- I really enjoyed Scott Drummonds’ recent storage performance analysis post. He goes pretty deep into some storage concepts and provides real-world, relevant information and recommendations. Good stuff.
Cloud Computing/Cloud Management
- After moving CloudStack to the Apache Software Foundation, Citrix published this discourse on “open washing” and provides a set of questions to determine the “openness” of software projects with which you may become involved. While the article is clearly structured to favor Citrix and CloudStack, the underlying point—to understand exactly what “open source” means to your vendors—is valid and worth consideration.
- Per the AWS blog, you can now export EC2 instances out of Amazon and into another environment, including VMware, Hyper-V, and Xen environments. I guess this kind of puts a dent in the whole “Hotel California” marketing play that some vendors have been using to describe Amazon.
- Unless you’ve been hiding under a rock for the past few weeks, you’ve most likely heard about Nick Weaver’s Razor project. (If you haven’t heard about it, here’s Nick’s blog post on it.) To help with the adoption/use of Razor, Nick also recently published an overview of the Razor API.
- Frank Denneman continues to do a great job writing solid technical articles. The latest article to catch my eye (and I’m sure that I missed some) was this post on combining affinity rule types.
- This is an interesting post on a vSphere 5 networking bug affecting iSCSI that was fixed in vSphere 5.0 Update 1.
- Make a note of this VMware KB article regarding UDP traffic on Linux guests using VMXNET3; the workaround today is using E1000 instead.
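For those needing the workaround, swapping the guest’s NIC type can be done by editing the VM’s .vmx file while the VM is powered off. Here “ethernet0” is simply the first virtual NIC—adjust the index for your VM, and double-check the details against the KB article:

```
ethernet0.virtualDev = "e1000"
```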
- This post is actually over a year old, but I just came across it: Luc Dekens posted a PowerCLI script that allows a user to find the maximum IOPS values over the last 5 minutes for a number of VMs. That’s handy. (BTW, I have fixed the error that kept me from seeing the post when it was first published—I’ve now subscribed to Luc’s blog.)
- Want to use a Debian server to provide NFS for your VMware environment? Here is some information that might prove helpful.
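The core of a Debian NFS setup for vSphere is a one-line export. Here’s a minimal sketch of /etc/exports, assuming a hypothetical /srv/vmstore directory and a 10.0.20.0/24 VMkernel subnet (both invented for the example); note that ESX(i) mounts NFS datastores as root, so no_root_squash is the key option:

```
/srv/vmstore  10.0.20.0/24(rw,sync,no_root_squash,no_subtree_check)
```

Run exportfs -ra after editing, then add the NFS datastore in vSphere pointing at the Debian server’s IP and the /srv/vmstore path.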
- Jeremy Waldrop of Varrow provides some information on creating a custom installation ISO for ESXi 5, Nexus 1000V, and PowerPath/VE. Cool!
- Cormac Hogan continues to pump out some very useful storage-focused articles on the official VMware vSphere blog. For example, both the VMFS locking article and the article on extending an EagerZeroedThick disk were great posts. I sincerely hope that Cormac keeps up the great work.
- Thanks to this Project Kronos page, I’ve been able to successfully set up XCP on Ubuntu Server 12.04 LTS. Here’s hoping it gets easier in future releases.
- Chris Colotti takes on some vCloud Director “challenges”, mostly surrounding vShield Edge and vCloud Director’s reliance on vShield Edge for specific networking configurations. While I do agree with many of Chris’ points, I personally would disagree that using vSphere HA to protect vShield Edge is an acceptable configuration. I was also unable to find any articles that describe how to use vSphere FT to protect the deployed vShield appliances. Can anyone point out one or more of those articles? (Put them in the comments.)
- Want to use Puppet to automate the deployment of vCenter Server? See here.
I guess it’s time to wrap up now, lest my “short take” get even longer than it already is! Thanks for reading this far, and I hope that I’ve shared something useful with you. Feel free to speak up in the comments if you have questions, thoughts, or clarifications.
Tags: Citrix, FCoE, HyperV, iSCSI, Linux, LISP, Microsoft, Networking, NFS, Security, Storage, vCloud, Virtualization, VMware, vSphere, VXLAN, Windows, Xen
I just finished reading a post on ZDNet titled “Are Hyper-V and App-V the new Windows Servers?” in which the author—Ken Hess—postulates that the rise of virtualization will shape the future of the Microsoft Windows OS such that, in his words:
The Server OS itself is an application. It’s little more than (or hopefully a little less than) Server Core.
The author also advises his readers that they “have to learn a new vocabulary” and that they’ll “deploy services and applications as workloads.”
Does any of this sound familiar to you?
It should. Almost 6 years ago, I was carrying on a blog conversation (with a web site that is now defunct) about the future of the OS. I speculated at that point that the general-purpose OS as we then knew it would be gone within 5 to 10 years. It looks like that prediction might be reasonably accurate. (Sadly, I was horribly wrong about Mac OS X, but everyone’s allowed to be wrong now and then, aren’t they?)
It should further sound familiar because almost 5 years ago, Srinivas Krishnamurti of VMware wrote an article describing a new (at the time) concept. This new concept was the idea of a carefully trimmed operating system (OS) instance that served as an application container:
By ripping out the operating system interfaces, functions, and libraries and automatically turning off the unnecessary services that your application does not require, and by tailoring it to the needs of the application, you are now down to a lithe, high performing, secure operating system – Just Enough of the Operating System, that is, or JeOS.
The idea of the server OS as an application container—what Ken suggests in very Microsoft-centric terms in his article—is not a new idea, but it is good to see those outside of the VMware space opening their eyes to the possibilities that a full-blown general purpose OS might not be the best answer anymore. Whether it is Microsoft’s technology or VMware’s technology that drives this innovation is a topic for another post, but it is pretty clear to me that this innovation is already occurring and will continue to occur.
The OS is dead, long live the OS!
<aside>If this is the case—and I believe that it is—what does this portend for massive OS upgrades such as Windows 8 (and Server 2012)?</aside>
Tags: HyperV, Microsoft, Virtualization, VMware, vSphere, Windows
Yesterday I posted an article regarding SR-IOV support in the next release of Hyper-V, and I commented in that article that I hoped VMware added SR-IOV support to vSphere. A couple of readers commented about why I felt SR-IOV support was important, what the use cases might be, and what the potential impacts could be to the vSphere networking environment. Those are all excellent questions, and I wanted to take the time to discuss them in a bit more detail than simply a response to a blog comment.
First, it’s important to point out—and this was stated in John Howard’s original series of posts to which I linked; in particular, this post—that SR-IOV is a PCI standard; therefore, it could potentially be used with any PCI device that supports SR-IOV. While we often discuss this in the networking context, it’s equally applicable in other contexts, including the HBA/CNA space. Maybe it’s just because in my job at EMC I see some interesting things that might never see the light of day (sorry, can’t say any more!), but I could definitely see the use for the ability to have multiple virtual HBAs/CNAs in an ESXi host. Think about the ability to pass an HBA/CNA VF (virtual function) up to a guest operating system on a host, and what sorts of potential advantages that might give you:
- The ability to zone on a per-VM basis
- Per-VM (more accurately, per-initiator) visibility into storage traffic and storage trends
Of course, this sort of model is not without drawbacks: in its current incarnation, assigning PCI devices to VMs breaks vMotion. But is that limitation a byproduct of the current way it’s being done, and would SR-IOV help alleviate that potential concern or issue? It sounds like Microsoft has found a way to leverage SR-IOV for NIC assignment without sacrificing live migration support (see John’s latest SR-IOV post). I suspect that bringing SR-IOV awareness into the hypervisor—and potentially into the guest OS via each vendor’s paravirtualized device drivers, aka VMware Tools in a vSphere context—might go a long way to helping address the live migration concerns with direct device assignment. Of course, I’m not a developer or a programmer, so feel free to (courteously!) correct me in the comments.
Are there use cases beyond providing virtual HBAs/CNAs? Here are a couple questions to get you thinking:
- Could you potentially leverage a single PCI fax board among multiple VMs (clearly you’d have to manage fax board capacity) to virtualize your fax servers?
- Would the presentation of virtual GPUs to a guest OS eliminate the need for a paravirtualized video driver, and would the lack of a paravirtualized video driver streamline the virtualization layer even more? The same goes for virtual NICs.
I’m not saying that all these things are possible—again, I’m not a developer so I could be way off base—but it seems to me that SR-IOV at least enables us to consider these sorts of options.
Regarding networking, this is where I see a lot of potential for SR-IOV. While VMware’s networking code is highly optimized, the movement of Ethernet switching into hardware on a NIC that supports SR-IOV has got to free up some CPU cycles and virtualization overhead. It also seems to me that putting that Ethernet switching on an SR-IOV NIC and then adding 802.1Qbg (EVB/VEPA) support would be a sweet combination. Mix in a hypervisor-to-NIC control plane for dynamically provisioning SR-IOV VFs and you’ve got a solution where provisioning a VM on a host dynamically creates an SR-IOV VF, attaches it to the VM, and uses EVB to provision a new VLAN on-demand onto that NIC. Is that a “pie in the sky” dream scenario? I’m not so sure that it’s that far off.
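As a concrete reference point for what a VF looks like today: on Linux, creating VFs on an SR-IOV-capable Intel 10GbE NIC is typically done via a driver module parameter, after which each VF appears as its own PCI function that can be assigned to a VM. A sketch—the driver name and VF count here are just examples:

```shell
# Load the ixgbe driver with four virtual functions per physical port
modprobe ixgbe max_vfs=4

# Each VF now shows up as a separate PCI function
lspci | grep -i "virtual function"
```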
What do you think? Please share your thoughts in the comments below. Where applicable, please provide disclosure. For example, I work for EMC, but I speak for myself.
Tags: Hardware, HyperV, Microsoft, Virtualization, VMware, vSphere
While browsing my list of RSS feeds tonight, I came across a series of articles by John Howard, a senior program manager on the Hyper-V team at Microsoft, describing SR-IOV support in the next version of Hyper-V, found in Windows “8”. I hadn’t heard that Microsoft was adding SR-IOV support to Hyper-V, so I was surprised when I saw it. Personally, I think SR-IOV support is a big deal (see the note at the end of this post for why).
If you’re not familiar with SR-IOV, I suggest you read this quick SR-IOV tutorial I published on this site in late 2009.
Here are the links to John’s SR-IOV in Hyper-V posts:
Everything you wanted to know about SR-IOV in Hyper-V, part 1
Everything you wanted to know about SR-IOV in Hyper-V, part 2
Everything you wanted to know about SR-IOV in Hyper-V, part 3
Everything you wanted to know about SR-IOV in Hyper-V, part 4
Everything you wanted to know about SR-IOV in Hyper-V, part 5
It’s great to see Microsoft adding SR-IOV support to Hyper-V; this brings SR-IOV out of the niche Linux market and into a broader, more mainstream market. This also applies some competitive pressure against market leader VMware, who now has to respond in some fashion—either by adding SR-IOV support to their ESXi hypervisor, or by explaining why SR-IOV support isn’t necessary. Personally, I hope that VMware does the former and not the latter.
(By the way, for those of you wondering why SR-IOV is important, there are lots of potential synergies here—in my view, at least—between hardware switching on an SR-IOV NIC and things like software-defined networking.)
Tags: Hardware, HyperV, Microsoft, Networking, Virtualization
Welcome to Technology Short Take #18! I hope you find something useful in this collection of networking, OS, storage, and virtualization links. Enjoy!
The number of articles in my “Networking” bucket continues to overflow; I have so many articles on so many topics (soft switching, OpenFlow, Open vSwitch, MPLS) that it’s hard to get my head wrapped around all of it. Here are a few posts that stuck out to me:
- Ivan Pepelnjak has a very well-written post explaining the various ways that virtual networking can be decoupled from the physical network.
- I stumbled across a trio of articles by Denton Gentry on hash tables (part 1, part 2, and part 3). This is an interesting perspective I hadn’t considered before; as we move more into software-defined networks (SDNs), why are we continuing to use the same mechanisms as before? Why not take advantage of more efficient mechanisms as part of this transition?
- Nigel Poulton and I traded a few tweets during HP Discover Vienna about SCSI Express (or SCSI over PCIe, SoP). He wrote up his thoughts about SoP and its future in the storage industry here. Further Twitter-based discussions about fabrics led him to say that HP buying Xsigo would bring the competition back against UCS. I’m not so sure I agree. Xsigo’s server fabric technology/product is interesting, but it seems to me that it’s still adding layers of abstraction that aren’t necessary. As SR-IOV, MR-IOV, and PCIe extension matures, it seems to me that Ethernet as the fabric is going to win. If that’s the case, and HP wants to bring the hurt against UCS, they’re going to have to invest in Ethernet-based fabrics.
- Speaking of UCS, here’s a “how to” on deploying the UCS Platform Emulator on vSphere. You might also like the UCS PE configuration follow-up post.
- Here’s what looks to be a handy Mac OS X utility to track how long until your Active Directory password expires. Sounds simple, yes, but useful.
- Jason Boche, after some collaboration with Bob Plankers, wrote up a good procedure for expanding the vCloud Director Transfer Server storage space. It’s definitely worth a read if you’re going to be working with vCloud Director.
- Microsoft has released version 3.2 of the Linux Integration Services for Hyper-V. The new release adds integrated mouse support, updated network drivers, and fixes an issue with SCVMM compatibility.
- Julian Wood, who I had the opportunity to meet in Copenhagen at VMworld 2011, has published a four-part series on managing vSphere 5 certificates. Follow these links for the series: part 1, part 2, part 3, and part 4.
- Thinking of deploying Oracle on vSphere? You should probably read this three-part series from VMware’s Business Critical Applications blog: part 1 is here, part 2 is here, and part 3 is here.
- I’m so used to dealing with VLANs in a vSphere environment, I didn’t consider the challenges that might come up when using them with VMware Workstation. Fortunately, this author did—read his post on mapping VLANs to VMnets in VMware Workstation.
- I thought that this article on virtual disks with business critical applications would be a deep dive on which virtual disk formats (thin, lazy zeroed, eager zeroed) are best suited for various applications. While the article does discuss the different virtual disk formats, unfortunately that’s as far as it goes.
- Fellow VMware vSphere Design co-author Forbes Guthrie highlights an important design concern with AutoDeploy: what about a virtual vCenter instance? Read his full article for the in-depth discussion.
- This post by William Lam gives a good overview of when vSphere MoRefs change (or don’t change).
- Here’s a good explanation why NIC teaming can’t be used with iSCSI binding.
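As a quick refresher on why: iSCSI port binding ties each VMkernel port 1:1 to a single active uplink and then binds that vmknic to the software iSCSI adapter, which is fundamentally at odds with a teamed vmknic. In vSphere 5 the binding looks roughly like this (the adapter and vmk names are examples from a typical setup, not prescriptive):

```shell
esxcli iscsi networkportal add --adapter vmhba33 --nic vmk1
esxcli iscsi networkportal add --adapter vmhba33 --nic vmk2
esxcli iscsi networkportal list --adapter vmhba33
```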
- Cormac Hogan also posted a nice overview of some new vmkfstools enhancements in vSphere 5.
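If you haven’t spent much time with vmkfstools, it’s the Swiss Army knife for VMDK and VMFS operations from the ESXi shell. A couple of representative (long-standing, not vSphere 5-specific) invocations for context:

```shell
# Clone a VMDK, converting it to thin provisioned format
vmkfstools -i source.vmdk -d thin destination.vmdk

# Grow an existing virtual disk to 40 GB
vmkfstools -X 40g myvm.vmdk
```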
- Terence Luk posts a detailed procedure to help recover VMware Site Recovery Manager in the event of a failure of one of the SRM servers. Good information—thanks Terence!
And that’s it for this time around. Feel free to add your thoughts in the comments below—all comments are welcome! (Please provide full disclosure of vendor affiliations/employment where applicable. Thanks!)
Tags: HyperV, Macintosh, Networking, Storage, UCS, vCloud, VDI, Virtualization, vSphere