
Welcome to Technology Short Take #42, another installment in my ongoing series of irregularly published collections of news, items, thoughts, rants, raves, and tidbits from around the Internet, with a focus on data center-related technologies. Here’s hoping you find something useful!

Networking

  • Anthony Burke’s series on VMware NSX continues with part 5.
  • Aaron Rosen, a Neutron contributor, recently published a post about a Neutron extension called Allowed-Address-Pairs and how you can use it to create high availability instances using VRRP (via keepalived). Very cool stuff, in my opinion. (A minimal sketch of the Neutron side of this appears after this list.)
  • Bob McCouch has a post over at Network Computing (where I’ve recently started blogging as well—see my first post) discussing his view on how software-defined networking (SDN) will trickle down to small and mid-sized businesses. He makes comparisons among server virtualization, 10 Gigabit Ethernet, and SDN, and feels that in order for SDN to really hit this market it needs to be “not a user-facing feature, but rather a means to an end” (his words). I tend to agree—focusing on SDN is focusing on the mechanism, rather than focusing on the problems the mechanism can address.
  • Want or need to use multiple external networks in your OpenStack deployment? Lars Kellogg-Stedman shows you how in this post on multiple external networks with a single L3 agent.
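Picking up on Aaron’s allowed-address-pairs item above: his post covers the full VRRP/keepalived setup, but the core Neutron change is a small port update. Below is a minimal sketch using python-neutronclient; the credentials, port IDs, and VIP are placeholders, and Aaron’s post remains the authoritative walkthrough.

```python
# Minimal sketch (not Aaron Rosen's exact workflow): allow a shared VRRP VIP on the
# Neutron ports of two instances so keepalived can fail the address over between them.
from neutronclient.v2_0 import client

neutron = client.Client(
    username="admin", password="secret",               # hypothetical credentials
    tenant_name="demo", auth_url="http://controller:5000/v2.0",
)

VRRP_VIP = "10.0.0.200"                                # assumed virtual IP managed by keepalived
port_ids = ["<port-id-of-instance-1>", "<port-id-of-instance-2>"]   # placeholders

for port_id in port_ids:
    # allowed-address-pairs lets traffic sourced from the shared VIP pass the port's
    # anti-spoofing rules, so whichever instance holds the VIP can answer for it
    neutron.update_port(port_id, {
        "port": {"allowed_address_pairs": [{"ip_address": VRRP_VIP}]}
    })
```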

Servers/Hardware

  • There was some noise this past week about Cisco UCS moving into the top x86 blade server spot for North America in Q1 2014. Kevin Houston takes a moment to explore some ideas why Cisco was so successful in this post. I agree that Cisco had some innovative ideas in UCS—integrated management and server profiles come to mind—but my biggest beef with UCS right now is that it is still primarily a north/south (server-to-client) architecture in a world where east/west (server-to-server) traffic is becoming increasingly critical. Can UCS hold on in the face of a fundamental shift like that? I don’t know.

Security

  • Need to scramble some data on a block device? Check out this command. (I love the commandlinefu.com site. It reminds me that I still have so much yet to learn.)
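The linked one-liner does this from the shell; purely to illustrate the idea, here is a rough Python equivalent. The device path is a placeholder and the operation is destructive, so treat this as a sketch rather than a recommended tool.

```python
# Minimal sketch: overwrite a block device with pseudo-random data (destructive!).
import os

DEVICE = "/dev/sdX"        # placeholder -- double-check the target before running
CHUNK = 4 * 1024 * 1024    # write in 4 MiB chunks

with open(DEVICE, "wb") as dev:
    try:
        while True:
            # os.urandom is fine for scrambling; this is not a certified secure-erase procedure
            dev.write(os.urandom(CHUNK))
    except OSError:
        pass               # writing past the end of the device raises ENOSPC, which ends the loop
```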

Cloud Computing/Cloud Management

  • Want to play around with OpenDaylight and OpenStack? Brent Salisbury has a write-up on how to set up OpenStack Icehouse (via DevStack) together with OpenDaylight.
  • Puppet Labs has released a module that allows users to programmatically (via Puppet) provision and configure Google Compute Engine (GCE) instances. More details are available in the Puppet Labs blog post.
  • I love how developers come up with these themes around certain projects. Case in point: “Heat” is the name of the project for orchestrating resources in OpenStack, HOT is the name for the format of Heat templates, and Flame is the name of a new project to automatically generate Heat templates.

Operating Systems/Applications

  • I can’t imagine that anyone has been immune to the onslaught of information on Docker, but here’s an article that might be helpful if you’re still looking for a quick and practical introduction.
  • Many of you are probably familiar with Razor, the project that former co-workers Nick Weaver and Tom McSweeney created when they were at EMC. Tom has since moved on to CSC (via the vCHS team at VMware) and has launched a “next-generation” version of Razor called Hanlon. Read more about Hanlon and why this is a new/separate project in Tom’s blog post here.
  • Looking for a bit of clarity around CoreOS and Project Atomic? I found this post by Major Hayden to be extremely helpful and informative. Both of these projects are on my radar, though I’ll probably focus on CoreOS first as the (currently) more mature solution.
  • Linux Journal has a nice multi-page write-up on Docker containers that might be useful if you are still looking to understand Docker’s basic building blocks.
  • I really enjoyed Donnie Berkholz’ piece on microservices and the migrating Unix philosophy. It was a great view into how composability can (and does) shift over time. Good stuff, I highly recommend reading it.
  • cURL is an incredibly useful utility, especially in today’s age of HTTP-based REST APIs. Here’s a list of 9 uses for cURL that are worth knowing. This article on testing REST APIs with cURL is handy, too. (A scripted take on the same kind of checks appears after this list.)
  • And for something entirely different…I know that folks love to beat up AppleScript, but it’s cross-application tasks like this that make it useful.
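As a companion to the cURL item above, the same sort of quick REST checks can also be scripted. This is a minimal sketch using Python’s requests library against a purely hypothetical endpoint; the linked articles cover the cURL versions.

```python
# Minimal sketch: quick REST API checks, roughly equivalent to a couple of cURL one-liners.
import requests

BASE = "https://api.example.com/v1"    # hypothetical endpoint

# GET with a query string, roughly `curl "https://api.example.com/v1/widgets?limit=5"`
resp = requests.get(f"{BASE}/widgets", params={"limit": 5}, timeout=10)
resp.raise_for_status()                # fail loudly on non-2xx responses
print(resp.json())

# POST a JSON body, roughly `curl -X POST -H "Content-Type: application/json" -d '{"name": "test"}' ...`
resp = requests.post(f"{BASE}/widgets", json={"name": "test"}, timeout=10)
print(resp.status_code, resp.headers.get("Content-Type"))
```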

Storage

  • Someone recently brought the open source Open vStorage project to my attention. Open vStorage compares itself to VMware VSAN, but it supports multiple storage backends and multiple hypervisors. Like a lot of other solutions, it’s implemented as a VM that presents NFS back to the hypervisors. If anyone out there has used it, I’d love to hear your feedback.
  • Erik Smith at EMC has published a series of articles on “virtual storage networks.” There’s some interesting content there—I haven’t finished reading all of the posts yet, as I want to be sure to take the time to digest them properly. If you’re interested, I suggest starting out with his introductory post (which, strangely enough, wasn’t the first post in the series), then moving on to part 1, part 2, and part 3.

Virtualization

  • Did you happen to see this write-up on migrating a VMware Fusion VM to VMware’s vCloud Hybrid Service? For now—I believe there are game-changing technologies out there that will alter this landscape—one of the very tangible benefits of vCHS is its strong interoperability with your existing vSphere (and Fusion!) workloads.
  • Need a listing of the IP addresses in use by the VMs on a given Hyper-V host? Ben Armstrong shares a bit of PowerShell code that produces just such a listing. As Ben points out, this can be pretty handy when you’re trying to track down a particular VM.
  • vCenter Log Insight 2.0 was recently announced; Vladan Seget has a decent write-up. I’m thinking of putting this into my home lab soon for gathering event information from VMware NSX, OpenStack, and the underlying hypervisors. I just need more than 24 hours in a day…
  • William Lam has an article on lldpnetmap, a little-known utility for mapping ESXi interfaces to physical switches. As the name implies, this relies on LLDP, so switches that don’t support LLDP or that don’t have LLDP enabled won’t work correctly. Still, a useful utility to have in your toolbox.
  • Technology previews of the next versions of Fusion (Fusion 7) and Workstation (Workstation 11) are available; see Eric Sloof’s articles (here and here for Fusion and Workstation, respectively) for more details.
  • vSphere 4 (and associated pieces) are no longer under general support. Sad face, but time stops for no man (or product).
  • Having some problems with VMware Fusion’s networking? Cody Bunch channels his inner Chuck Norris to kick VMware Fusion networking in the teeth.
  • Want to preview OS X Yosemite? Check out William Lam’s guide to using Fusion or vSphere to preview the new OS X beta release.

I’d better wrap this up now, or it’s going to turn into one of Chad’s posts. (Just kidding, Chad!) Thanks for taking the time to read this far!


Welcome to Technology Short Take #40. The content is a bit light this time around; I thought I’d give you, my readers, a little break. Hopefully there’s still some useful and interesting stuff here. Enjoy!

Networking

  • Bob McCouch has a nice write-up on options for VPNs to AWS. If you’re needing to build out such a solution, you might want to read his post for some additional perspectives.
  • Matthew Brender touches on a networking issue present in VMware ESXi with regard to VMkernel multi-homing. This is something others have touched on before (including myself, back in 2008—not 2006 as I tweeted one day), but Matt’s write-up is concise and to the point. You’ll definitely want to keep this consideration in mind for your designs. Another thing to consider: vSphere 5.5 introduces the idea of multiple TCP/IP stacks, each with its own routing table. As the ability to use multiple TCP/IP stacks extends throughout vSphere, it’s entirely possible this limitation will go away entirely.
  • YAOFC (Yet Another OpenFlow Controller), interesting only because it focuses on issues of scale (tens of thousands of switches with hundreds of thousands of endpoints). See here for details.

Servers/Hardware

  • Intel recently announced a refresh of the E5 CPU line; Kevin Houston has more details here.

Security

  • This one slipped past me in the last Technology Short Take, so I wanted to be sure to include it here. Mike Foley—whom I’m sure many of you know—recently published an ESXi security whitepaper. His blog post provides more details, as well as a link to download the whitepaper.
  • The OpenSSL “Heartbleed” vulnerability has captured a great deal of attention (justifiably so). Here’s a quick article on how to assess if your Linux-based server is affected.
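The linked article is the proper guide, but for context: OpenSSL 1.0.1 through 1.0.1f were the affected releases, so a version check is the usual first step. The sketch below only reports the OpenSSL build that Python itself is linked against, which may differ from the system’s OpenSSL; `openssl version` at a shell is the more direct check.

```python
# Rough screen for the Heartbleed-affected OpenSSL range (1.0.1 through 1.0.1f).
import re
import ssl

version = ssl.OPENSSL_VERSION                       # e.g. "OpenSSL 1.0.1e 11 Feb 2013"
match = re.search(r"1\.0\.1([a-z]?)", version)
# 1.0.1 with no letter suffix through 1.0.1f were vulnerable; 1.0.1g and later carry the fix
affected = bool(match) and match.group(1) in list("abcdef") + [""]
print(version, "-- possibly affected" if affected else "-- not in the affected range")
```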

Cloud Computing/Cloud Management

  • I recently built a Windows Server 2008 R2 image for use in my OpenStack home lab. This isn’t as straightforward as building a Linux image (no surprises there), but I did find a few good articles that helped along the way. If you find yourself needing to build a Windows image for OpenStack, check out creating a Windows image on OpenStack (via Gridcentric) and building a Windows image for OpenStack (via Brent Salisbury). You might also check out Cloudbase.it, which offers a version of cloud-init for Windows as well as some prebuilt evaluation images. (Note: I was unable to get the prebuilt images to download, but YMMV.)
  • Speaking of building OpenStack images, here’s a “how to” guide on building a Debian 7 cloud image for OpenStack.
  • Sean Roberts recently launched a series of blog posts about various OpenStack projects that he feels are important. The first project he highlights is Congress, a policy management project that has recently gotten a fair bit of attention (see a reference to Congress at the end of this recent article on the mixed messages from Cisco on OpFlex). In my opinion, Congress is a big deal, and I’m really looking forward to seeing how it evolves.
  • I have a related item below under Virtualization, but I wanted to point this out here: work is being done on a VIF driver to connect Docker containers to Open vSwitch (and thus to OpenStack Neutron). Very cool. See here for details.
  • I love that Cody Bunch thinks a lot like I do, like this quote from a recent post sharing some links on OpenStack Heat: “That generally means I’ve got way too many browser tabs open at the moment and need to shut some down. Thus, here comes a huge list of OpenStack links and resources.” Classic! Anyway, check out the list of Heat resources, you’re bound to find something useful there.

Operating Systems/Applications

  • A short while back I had a Twitter conversation about spinning up a Minecraft server for my kids in my OpenStack home lab. That led to a few other discussions, one of which was how cool it would be if you could use Heat autoscaling to scale Minecraft. Then someone sent me this.
  • Per the Microsoft Windows Server Team’s blog post, the Windows Server 2012 R2 Update is now generally available (there’s also a corresponding update for Windows 8.1).

Storage

  • Did you see that EMC released a virtual edition of VPLEX? It’s being called the “data plane” for software-defined storage. VPLEX is an interesting product, no doubt, and the introduction of a virtual edition is intriguing (but not entirely unexpected). I did find it unusual that the release of the virtual edition signalled the addition of a new feature called “MetroPoint”, which allows two sites to replicate back to a single site. See Chad Sakac’s blog post for more details.
  • This discussion on MPIO and in-guest iSCSI is a great reminder that designing solutions in a virtualized data center (or, dare I say it—a software-defined data center?) isn’t the same as designing solutions in a non-virtualized environment.

Virtualization

  • Ben Armstrong talks briefly about Hyper-V protected networks, which is a way to protect a VM against network outage by migrating the VM to a different host if a link failure occurs. This is kind of handy, but requires Windows Server clustering in order to function (since live migration in Hyper-V requires Windows Server clustering). A question for readers: is Windows Server clustering still much the same as it was in years past? It was a great solution back then, but now it seems outdated.
  • At the same time, though, Microsoft is making some useful networking features easily accessible in Hyper-V. Two more of Ben’s articles show off the DHCP Guard and Router Guard features available in Hyper-V on Windows Server 2012.
  • There have been a pretty fair number of posts talking about nested ESXi (ESXi running as a VM on another hypervisor), either on top of ESXi or on top of VMware Fusion/VMware Workstation. What I hadn’t seen—until now—was how to get that working with OpenStack. Here’s how Mathias Ewald made it work.
  • And while we’re talking nested hypervisors, be sure to check out William Lam’s post on running a nested Xen hypervisor with VMware Tools on ESXi.
  • Check out this potential way to connect Docker containers with Open vSwitch (which then in turn opens up all kinds of other possibilities).
  • Jason Boche regales us with a tale of a vCenter 5.5 Update 1 upgrade that results in missing storage providers. Along the way, he also shares some useful information about Profile-Driven Storage in general.
  • Eric Gray shares information on how to prepare an ESXi ISO for PXE booting.
  • PowerCLI 5.5 R2 has some nice new features. Skip over to Alan Renouf’s blog to read up on what is included in this latest release.

I should close things out now, but I do have one final link to share. I really enjoyed Nick Marshall’s recent post about the power of a tweet. In the post, Nick shares how three tweets—one with Duncan Epping, one with Cody Bunch, and one with me—have dramatically altered his life and his career. It’s pretty cool, if you think about it.

Anyway, enough is enough. I hope that you found something useful here. I encourage readers to contribute to the discussion in the comments below. All courteous comments are welcome.


Welcome to Technology Short Take #39, in which I share a random assortment of links, articles, and thoughts from around the world of data center-related technologies. I hope you find something useful—or at least something interesting!

Networking

  • Jason Edelman has been talking about the idea of a Common Programmable Abstraction Layer (CPAL). He introduces the idea, then goes on to explore—as he puts it—the power of a CPAL. I can’t help but wonder if this is the right level at which to put the abstraction layer. Is the abstraction layer better served by being integrated into a cloud management platform, like OpenStack? Naturally, the argument then would be, “Not everyone will use a cloud management platform,” which is a valid argument. For those customers who won’t use a cloud management platform, I would then ask: will they benefit from a CPAL? I mean, if they aren’t willing to embrace the abstraction and automation that a cloud management platform brings, will abstraction and automation at the networking layer provide any significant benefit? I’d love to hear others’ thoughts on this.
  • Ethan Banks also muses on the need for abstraction.
  • Craig Matsumoto of SDN Central helps highlight a recent (and fairly significant) development in networking protocols—the submission of the Generic Network Virtualization Encapsulation (Geneve) proposal to the IETF. Jointly authored by VMware, Microsoft, Red Hat, and Intel, this new protocol proposal attempts to bring together the strengths of the various network virtualization encapsulation protocols out there today (VXLAN, STT, NVGRE). This is interesting enough that I might actually write up a separate blog post about it; stay tuned for that.
  • Lee Doyle provides an analysis of the market for network virtualization, which includes some introductory information for those who might be unfamiliar with what network virtualization is. I might contend that Open vSwitch (OVS) alone isn’t an option for network virtualization, but that’s just splitting hairs. Overall, this is a quick but worthy read if you are trying to get started in this space.
  • Don’t think this “software-defined networking” thing is going to take off? Read this, and then let me know what you think.
  • Chris Margret has a nice dissection of how bash completion works, particularly in regards to the Cumulus Networks implementation.

Servers/Hardware

  • Via Kevin Houston, you can get more details on the Intel E7 v2 and new blade servers based on the new CPU. x86 marches on!
  • Another interesting tidbit regarding hardware: it seems as if we are now seeing the emergence of another round of “hardware offloads.” The first round came about around 2006 when Intel and AMD first started releasing their hardware assists for virtualization (Intel VT and AMD-V, respectively). That technology was only “so-so” at first (VMware ESX continued to use binary translation [BT] because it was still faster than the hardware offloads), but it quickly matured and is now leveraged by every major hypervisor on the market. This next round of hardware offloads seems targeted at network virtualization and related technologies. Case in point: a relatively small company named Netronome (I’ve spoken about them previously, first back in 2009 and again a year later), recently announced a new set of network interface cards (NICs) expressly designed to provide hardware acceleration for software-defined networking (SDN), network functions virtualization (NFV), and network virtualization solutions. You can get more details from the Netronome press release. This technology is actually quite interesting; I’m currently talking with Netronome about testing it with VMware NSX and will provide more details as that evolves.

Security

  • Ben Rossi tackles the subject of security in a software-defined world, talking about how best to integrate security into SDN-driven architectures and solutions. It’s a high-level article and doesn’t get into a great level of detail, but does point out some of the key things to consider.

Cloud Computing/Cloud Management

  • “Racker” James Denton has some nice articles on OpenStack Neutron that you might find useful. He starts out with discussing the building blocks of Neutron, then goes on to discuss building a simple flat network, using VLAN provider networks, and Neutron routers and the L3 agent. And if you need a breakdown of provider vs. tenant networks in Neutron, this post is also quite handy. (A small provider network sketch appears after this list.)
  • Here are a couple of quick walk-throughs (first one, second one) on installing OpenStack. They don’t provide any in-depth explanations of what’s going on, why you’re doing what you’re doing, or how it relates to the rest of the steps, but you might find something useful nevertheless.
  • Thinking of building your own OpenStack cloud in a home lab? Kevin Jackson—who along with Cody Bunch co-authored the OpenStack Cloud Computing Cookbook, 2nd Edition—has three articles up on his home OpenStack setup. (At least, I’ve only found three articles so far.) Part 1 is here, part 2 is here, and part 3 is here. Enjoy!
  • This post attempts to describe some of the core (mostly non-technical) differences between OpenStack and OpenNebula. It is published on the OpenNebula.org site, so keep that in mind as it is (naturally) biased toward OpenNebula. It would be quite interesting to me to see a more technically-focused discussion of the two approaches (and, for that matter, let’s include CloudStack as well). Perhaps this already exists—does anyone know?
  • CloudScaling recently added a Google Compute Engine (GCE) API compatibility module to StackForge, to allow users to leverage the GCE API with OpenStack. See more details here.
  • Want to run Hyper-V in your OpenStack environment? Check this out. Also from the same folks is a version of cloud-init for Windows instances in cloud environments. I’m testing this in my OpenStack home lab now, and hope to have more information soon.
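Related to James Denton’s provider network articles mentioned above: once the plugin is configured, creating a VLAN provider network is a couple of API calls. Here is a minimal sketch with python-neutronclient; the credentials, physical network label, VLAN ID, and subnet are all assumptions that would need to match your environment, and James’ posts remain the real reference.

```python
# Minimal sketch: create a VLAN provider network and subnet with python-neutronclient.
from neutronclient.v2_0 import client

neutron = client.Client(
    username="admin", password="secret",             # hypothetical admin credentials
    tenant_name="admin", auth_url="http://controller:5000/v2.0",
)

# Map the network onto an existing VLAN; the label and ID must match your ML2/plugin config
network = neutron.create_network({"network": {
    "name": "vlan100-provider",
    "provider:network_type": "vlan",
    "provider:physical_network": "physnet1",          # assumed physical network label
    "provider:segmentation_id": 100,                  # assumed VLAN ID
    "shared": True,
}})["network"]

# Attach a subnet so instances pick up addresses from the existing VLAN's range
neutron.create_subnet({"subnet": {
    "network_id": network["id"],
    "ip_version": 4,
    "cidr": "192.168.100.0/24",
    "gateway_ip": "192.168.100.1",
}})
```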

Operating Systems/Applications

Storage

Virtualization

  • Brendan Gregg of Joyent has an interesting write-up comparing virtualization performance between Zones (apparently referring to Solaris Zones, a form of OS virtualization/containerization), Xen, and KVM. I might disagree that KVM is a Type 2 hardware virtualization technology, pointing out that Xen also requires a Linux-based dom0 in order to function. (The distinction between a Type 1 that requires a general purpose OS in a dom0/parent partition and a Type 2 that runs on top of a general purpose OS is becoming increasingly blurred, IMHO.) What I did find interesting was that they (Joyent) run a ported version of KVM inside Zones for additional resource controls and security. Based on the results of his testing—performed using DTrace—it would seem that the “double-hulled virtualization” doesn’t really impact performance.
  • Pete Koehler—via Jason Langer’s blog—has a nice post on converting in-guest iSCSI volumes to native VMDKs. If you’re in a similar situation, check out the post for more details.
  • This is interesting. Useful, I’m not so sure about, but definitely interesting.
  • If you are one of the few people living under a rock who doesn’t know about PowerCLI, Alan Renouf is here to help.

It’s time to wrap up; this post has already run longer than usual. There was just so much information that I want to share with you! I’ll be back soon-ish with another post, but until then feel free to join (or start) the conversation by adding your thoughts, ideas, links, or responses in the comments below.


This is session EDCS008, “Virtualizing the Network to Enable a Software-Defined Infrastructure (SDI).” The speakers are Brian Johnson (@thehevy on Twitter) from Intel and Jim Pinkerton from Microsoft. Brian is a Solution Architect; Jim is a Windows Server Architect. If you’ve ever been in one of Brian’s presentations, you know he does a great job of really diving deep in some of this stuff. (Can you tell I’m a fan?)

Brian starts the session with a review of how the data center has evolved over the last 10 years or so, driven by the widespread adoption of compute virtualization, increased CPU capacity, and the adoption of 10Gb Ethernet. This naturally leads to a discussion of software-defined networking (SDN) as a means whereby the network can evolve to keep up with the rapid pace of change and innovation in other areas of the data center. Why is this a big deal? Brian draws a comparison between property management and how IT is shaping up:

  • A rental house is pretty easy to manage. One tenant, infrequent change, long-term investments.
  • An apartment means more tenants, but still relatively infrequent change.
  • A hotel means lots of tenants and the ability to handle frequent change and lots of room turnover.

The connection here is VMs—we’re now running lots of VMs, and the VMs change regularly. The infrastructure needs to be ready to handle this rapid pace of change.

At this point, Jim Pinkerton of Microsoft takes over to discuss how Windows Server thinks about this issue and these challenges. According to Jim, the world has moved beyond virtualization—it now needs the ability to scale and secure many workloads cost-effectively. You need greater automation, and you need to support any type of application. Jim talks about private clouds, hosting (IaaS-type services), and public clouds. He points out that MTTR (Mean Time to Repair) is a more important metric than MTBF (Mean Time Between Failures).

Driven by how the data center is evolving (the points in the previous paragraph), the network needs to be evolved:

  • Deliver networking as part of a pooled, automated infrastructure
  • Ensure multitenant isolation, scale, and performance
  • Expand data center capacity seamlessly as per business needs
  • Reduce operational complexity

Out of these design principles comes SDN, according to Pinkerton. Key attributes of SDN, according to Microsoft, are flexibility, control, and automation. At this point Pinkerton digresses into a discussion of SMB3 and its performance characteristics over 10Gb Ethernet—which, frankly, is completely unrelated to the topic of the presentation. After a few slides of discussing SMB3 with very little relevance to the rest of the discussion, Pinkerton moves back into a discussion of the virtual switch found in Windows Server 2012 R2.

Brian now takes over again, focusing on virtual switch performance and behavior. East-west traffic between VMs can hit 60–70Gbps, because it all happens inside the server. How do we maintain that traffic performance when we see east-west traffic between servers? We can deploy more interfaces, which is commonly seen. Moving to 10Gb Ethernet is another solution. Intel needed to add features to their network controllers—features like stateless offloads, virtual machine queues, and SR-IOV support—in order to drive performance for multiple 10Gb Ethernet interfaces. SR-IOV can help address some performance and utilization concerns, but this presents a problem when working with network virtualization. If you’re bypassing the hypervisor, how do you get on the virtual network?

Brian leaves this question for now to talk about how network virtualization with overlays helps address some of the network provisioning concerns that exist today. He provides an example of how using overlays—he uses NVGRE, since this is a joint presentation with Microsoft—can allow tenants (customers, internal business units, etc.) to share private address spaces and eliminate many manual VLAN configuration tasks. He makes the point that network virtualization is possible without SDN, but SDN makes it much easier and simpler to manage and implement network virtualization.

One drawback of overlays is that many network interface cards (NICs) today don’t “understand” the overlays, and therefore can’t perform certain hardware offloads that help optimize traffic and utilization. However, Brian shows a next-gen Intel NIC that will understand network overlays and will be able to perform offloads (like LSO, RSS, and VMQ) on encapsulated traffic.

This leads Brian to a discussion of Intel Open Network Platform (ONP), which encompasses two aspects:

  1. Intel ONP Switch reference design (aka “Seacliff Trail”), which leverages Intel silicon to support SDN and network virtualization
  2. Intel ONP Server reference design, which shows how to optimize virtual switching using Intel’s Data Plane Development Kit (DPDK)

The Intel ONP Server reference design (sorry, can’t remember the code name) actually uses Open vSwitch (OVS) as a core part of its design.

Intel ONP includes something called FlexPipe (this is part of the Intel FM6700 chipset) to enable faster innovation and quicker support for encapsulation protocols (like NVGRE, VXLAN, and whatever might come next). The Intel ONP Switch supports serving as a bridge to connect physical workloads into virtual networks that are encapsulated, and being able to do this at full line rate using 40Gbps uplinks.

At this point, Brian and Jim wrap up the session and open up for questions and answers.


Welcome to Technology Short Take #28, the first Technology Short Take for 2013. As always, I hope that you find something useful or informative here. Enjoy!

Networking

  • Ivan Pepelnjak recently wrote a piece titled “Edge and Core OpenFlow (and why MPLS is not NAT)”. It’s an informative piece—Ivan’s stuff is always informative—but what really drew my attention was his mention of a paper by Martin Casado, Teemu Koponen, and others that calls for a combination of MPLS and OpenFlow (and an evolution of OpenFlow into “edge” and “core” versions) to build next-generation networks. I’ve downloaded the paper and intend to review it in more detail. I’d love to hear from any networking experts who’ve read the paper—what are your thoughts?
  • Speaking of Ivan…it also appears that he’s quite pleased with Microsoft’s implementation of NVGRE in Hyper-V. Sounds like some of the other vendors need to get on the ball.
  • Here’s a nice explanation of CloudStack’s physical networking architecture.
  • The first fruits of Brad Hedlund’s decision to join VMware/Nicira have shown up in this joint article by Brad, Bruce Davie, and Martin Casado describing the role of network virtualization in the software-defined data center. (It doesn’t matter how many times I say or write “software-defined data center,” it still feels like a marketing term.) This post is fairly high-level and abstract; I’m looking forward to seeing more detailed and in-depth posts in the future.
  • Art Fewell speculates that the networking industry has “lost our way” and become a “big bag of protocols” in this article. I do agree with one of the final conclusions that Fewell makes in his article: that SDN (a poorly-defined and often over-used term) is the methodology of cloud computing applied to networking. Therefore, SDN is cloud networking. That, in my humble opinion, is a more holistic and useful way of looking at SDN.
  • It appears that the vCloud Connector posts (here and here) that (apparently) incorrectly identify VXLAN as a component/prerequisite of vCloud Connector have yet to be corrected. (Hat tip to Kenneth Hui at VCE.)

Servers/Hardware

Nothing this time around, but I’ll watch for content to include in future posts.

Security

  • Here’s a link to a brief (too brief, in my opinion, but perhaps I’m just being overly critical) post on KVM virtualization security, authored by Dell TechCenter. It provides some good information on securing the libvirt communication channel.

Cloud Computing/Cloud Management

  • Long-time VMware users probably remember Mike DiPetrillo, whose website has now, unfortunately, gone offline. I mention this because I’ve had this article on RabbitMQ AMQP with vCloud Director sitting in my list of “articles to write about” for a while, but some of the images were missing and I couldn’t find a link for the article. I finally found a link to a reprinted version of the article on DZone Enterprise Integration. Perhaps the article will be of some use to someone.
  • Sam Johnston talks about reliability in the cloud with a discussion on the merits of “reliable software” (software designed for failure) vs. “unreliable software” (more traditional software not designed for failure). It’s a good article, but I found the discussion between Sam and Massimo (of VMware) to be equally useful.

Operating Systems/Applications

Storage

  • Want some good details on the space-efficient sparse disk format in vSphere 5.1? Andre Leibovici has you covered right here.
  • Read this article for good information from Andre on a potential timeout issue with recomposing desktops and using the View Storage Accelerator (aka Content-Based Read Cache, or CBRC).
  • Apparently Cormac Hogan, aka @VMwareStorage on Twitter, hasn’t gotten the memo that “best practices” is now outlawed. He should have named this series on NFS with vSphere “NFS Recommended Practices”, but even misnamed as they are, the posts still have useful information. Check out part 1, part 2, and part 3.
  • If you’d like to get a feel for how VMware sees the future of flash storage in vSphere environments, read this.

Virtualization

  • This is a slightly older post, but informative and useful nevertheless. Cormac posted an article on VAAI offloads and KAVG latency when observed in esxtop. The summary of the article is that the commands esxtop is tracking are internal to the ESXi kernel only; therefore, abnormal KAVG values do not represent any sort of problem. (Note there’s also an associated VMware KB article.)
  • More good information from Cormac here on the use of the SunRPC.MaxConnPerIP advanced setting and its impact on NFS mounts and NFS connections.
  • Another slightly older article (from September 2012) is this one from Frank Denneman on how vSphere 5.1 handles parallel Storage vMotion operations.
  • A fellow IT pro contacted me on Twitter to see if I had any idea why some shares on his Windows Server VM weren’t working. As it turns out, the problem is related to hotplug functionality; the OS sees the second drive as “removable” due to hotplug functionality, and therefore shares don’t work. The problem is outlined in a bit more detail here.
  • William Lam outlines how to use new tagging functionality in esxcli in vSphere 5.1 for more comprehensive scripted configurations. The new tagging functionality—if I’m reading William’s write-up correctly—means that you can configure VMkernel interfaces for any of the supported traffic types via esxcli. Neat.
  • Chris Wahl has a nice write-up on the behavior of Network I/O Control with multi-NIC vMotion traffic. It was pointed out in the comments that the behavior Chris describes is documented, but the write-up is still handy, and an important factor to keep in mind in your designs.

I suppose I should end it here, before this “short take” turns into a “long take”! In any case, courteous comments are always welcome, so if you have additional information, clarifications, or corrections to share regarding any of the articles or links in this post, feel free to speak up below.


Welcome to Technology Short Take #23, another collection of links and thoughts related to data center technologies like networking, storage, security, cloud computing, and virtualization. As usual, we have a fairly wide-ranging collection of items this time around. Enjoy!

Networking

  • A couple of days ago I learned that there are a couple open source implementations of LISP (Locator/ID Separation Protocol). There’s OpenLISP, which runs on FreeBSD, and there’s also a project called LISPmob that brings LISP to Linux. From what I can tell, LISPmob appears to be a bit more focused on the endpoint than OpenLISP.
  • In an earlier post on STT, I mentioned that STT’s re-use of the TCP header structure could cause problems with intermediate devices. It looks like someone has figured out how to allow STT through a Cisco ASA firewall; the configuration is here.
  • Jose Barreto posted a nice breakdown of SMB Multichannel, a bandwidth-enhancing feature of SMB 3.0 that will be included in Windows Server 2012. It is, not unexpectedly, only supported between two SMB 3.0-capable endpoints (which, at this time, means two Windows Server 2012 hosts). Hopefully additional vendors will adopt SMB 3.0 as a network storage protocol. Just don’t call it CIFS!
  • Reading this article, you might deduce that Ivan really likes overlay/tunneling protocols. I am, of course, far from a networking expert, but I do have to ask: at what point does it become necessary (if ever) to move some of the intelligence “deeper” into the stack? Networking experts everywhere advocate the “complex edge-simple core” design, but does it ever make sense to move certain parts of the edge’s complexity into the core? Do we hamper innovation by insisting that the core always remain simple? As I said, I’m not an expert, so perhaps these are stupid questions.
  • Massimo Re Ferre posted a good article on a typical VXLAN use case. Read this if you’re looking for a more concrete example of how VXLAN could be used in a typical enterprise data center.
  • Bruce Davie of Nicira helps explain the difference between VPNs and network virtualization; this is a nice companion article to his colleague’s post (which Bruce helped to author) on the difference between network virtualization and software-defined networking (SDN).
  • The folks at Nicira also collaborated on this post regarding software overhead of tunneling. The results clearly favor STT (which was designed to take advantage of NIC offloading) over GRE, but the authors do admit that as “GRE awareness” is added to the cards that protocol’s performance will improve.
  • Oh, and while we’re on the topic of SDN…you might have noticed that VMware has taken to using the term “software-defined” to describe many of the services that vSphere (and related products) provide. This includes the use of software-defined networking (SDN) to describe the functionality of vSwitches, distributed vSwitches, vShield, and other features. Personally, I think that the term software-based networking (SBN) is far more applicable than SDN to what VMware does. Is it just me?
  • Brad Hedlund wrote this post a few months ago, but I’m just now getting around to commenting about it. The gist of the article—forgive me if I munge it too much, Brad—is that the use of open source software components might dramatically change the shape/way/means in which networking protocols and standards are created and utilized. If two components are communicating over the network via open source components, is some sort of networking standard needed to avoid being “proprietary”? It’s an interesting thought, and goes to show the power of open source on the IT industry. Great post, Brad.
  • One more mention of OpenFlow/SDN: it’s great technology (and I’m excited about the possibilities that it creates), but it’s not a silver bullet for scalability.

Security

  • I came across this interesting post on a security attack based on VMDKs. It’s quite an interesting read, even if the probability of being able to actually leverage this attack vector is fairly low (as I understand it).

Storage

  • Chris Wahl has a good series on NFS with VMware vSphere. You can catch the start of the series here. One comment on the testing he performs in the “Same Subnet” article: if I’m not mistaken, I believe the VMkernel selection is based upon which VMkernel interface is listed in the first routing table entry for the subnet. This is something about which I wrote back in 2008, but I’m glad to see Chris bringing it to light again.
  • George Crump published this article on using DCB to enhance iSCSI. (Note: The article is quite favorable to Dell, and George discloses an affiliation with Dell at the end of the article.) One thing I did want to point out is that—if I recall correctly—the 802.1Qbb standard for Priority Flow Control only defines a single “no drop” class of service (CoS). Normally that CoS is assigned to FCoE traffic, but in an environment without FCoE you could assign it to iSCSI. In an environment with both, that could be a potential problem, as I see it. Feel free to correct me in the comments if my understanding is incorrect.
  • Microsoft is introducing data deduplication in Windows Server 2012, and here is a good post providing an introduction to Microsoft’s deduplication implementation.
  • SANRAD VXL looks interesting—anyone have any experience with it? Or more detailed technical information?
  • I really enjoyed Scott Drummonds’ recent storage performance analysis post. He goes pretty deep into some storage concepts and provides real-world, relevant information and recommendations. Good stuff.

Cloud Computing/Cloud Management

  • After moving CloudStack to the Apache Software Foundation, Citrix published this discourse on “open washing” and provides a set of questions to determine the “openness” of software projects with which you may become involved. While the article is clearly structured to favor Citrix and CloudStack, the underlying point—to understand exactly what “open source” means to your vendors—is valid and worth consideration.
  • Per the AWS blog, you can now export EC2 instances out of Amazon and into another environment, including VMware, Hyper-V, and Xen environments. I guess this kind of puts a dent in the whole “Hotel California” marketing play that some vendors have been using to describe Amazon.
  • Unless you’ve been hiding under a rock for the past few weeks, you’ve most likely heard about Nick Weaver’s Razor project. (If you haven’t heard about it, here’s Nick’s blog post on it.) To help with the adoption/use of Razor, Nick also recently announced an overview of the Razor API.

Virtualization

  • Frank Denneman continues to do a great job writing solid technical articles. The latest article to catch my eye (and I’m sure that I missed some) was this post on combining affinity rule types.
  • This is an interesting post on a vSphere 5 networking bug affecting iSCSI that was fixed in vSphere 5.0 Update 1.
  • Make a note of this VMware KB article regarding UDP traffic on Linux guests using VMXNET3; the workaround today is using E1000 instead.
  • This post is actually over a year old, but I just came across it: Luc Dekens posted a PowerCLI script that allows a user to find the maximum IOPS values over the last 5 minutes for a number of VMs. That’s handy. (BTW, I have fixed the error that kept me from seeing the post when it was first published—I’ve now subscribed to Luc’s blog.)
  • Want to use a Debian server to provide NFS for your VMware environment? Here is some information that might prove helpful.
  • Jeremy Waldrop of Varrow provides some information on creating a custom installation ISO for ESXi 5, Nexus 1000V, and PowerPath/VE. Cool!
  • Cormac Hogan continues to pump out some very useful storage-focused articles on the official VMware vSphere blog. For example, both the VMFS locking article and the article on extending an EagerZeroedThick disk were great posts. I sincerely hope that Cormac keeps up the great work.
  • Thanks to this Project Kronos page, I’ve been able to successfully set up XCP on Ubuntu Server 12.04 LTS. Here’s hoping it gets easier in future releases.
  • Chris Colotti takes on some vCloud Director “challenges”, mostly surrounding vShield Edge and vCloud Director’s reliance on vShield Edge for specific networking configurations. While I do agree with many of Chris’ points, I personally would disagree that using vSphere HA to protect vShield Edge is an acceptable configuration. I was also unable to find any articles that describe how to use vSphere FT to protect the deployed vShield appliances. Can anyone point out one or more of those articles? (Put them in the comments.)
  • Want to use Puppet to automate the deployment of vCenter Server? See here.

I guess it’s time to wrap up now, lest my “short take” get even longer than it already is! Thanks for reading this far, and I hope that I’ve shared something useful with you. Feel free to speak up in the comments if you have questions, thoughts, or clarifications.


Welcome to Technology Short Take #22! Once again, I find myself without too many articles to share with you this time around. I guess that will make things a bit easier for you, the reader, but it does make me question whether or not I’m “listening” to the right communities. If any readers have suggestions on sources of information I should be subscribing to or following, I’d love to hear them.

In any case, let’s get into the meat of it. I hope you find something useful!

Networking

Security

  • I have to agree with Tom Hollingsworth that we often create backdoors by design simply out of our own laziness. I’ve heard it said—in fact I may have used the statement myself—that no amount of security can fix stupidity. That might be a bit strong, but it does apply to the “shortcuts” that we create for ourselves or our customers in our designs.

Servers/Hardware

  • Kevin Houston (who works for Dell) posted an article about a recent test report comparing power usage between Dell blades and Cisco UCS blades. If you’re comparing these two solutions, find a comparable report from Cisco and then draw your own conclusions. (Always get multiple views on a topic like this, because every vendor—and I know because I work for a vendor, too—will spin the report in their favor.)

Virtualization

That’s it for this time around. I hope that you have found something useful here. If anyone has any suggestions for sites/forums they’ve found helpful with data center-focused topics, I’d love for you to add that information in the comments.


I just finished reading a post on ZDNet titled “Are Hyper-V and App-V the new Windows Servers?” in which the author—Ken Hess—postulates that the rise of virtualization will shape the future of the Microsoft Windows OS such that, in his words:

The Server OS itself is an application. It’s little more than (or hopefully a little less than) Server Core.

The author also advises his readers that they “have to learn a new vocabulary” and that they’ll “deploy services and applications as workloads.”

Does any of this sound familiar to you?

It should. Almost 6 years ago, I was carrying on a blog conversation (with a web site that is now defunct) about the future of the OS. I speculated at that point that the general-purpose OS as we then knew it would be gone within 5 to 10 years. It looks like that prediction might be reasonably accurate. (Sadly, I was horribly wrong about Mac OS X, but everyone’s allowed to be wrong now and then, aren’t they?)

It should further sound familiar because almost 5 years ago, Srinivas Krishnamurti of VMware wrote an article describing a new (at the time) concept. This new concept was the idea of a carefully trimmed operating system (OS) instance that served as an application container:

By ripping out the operating system interfaces, functions, and libraries and automatically turning off the unnecessary services that your application does not require, and by tailoring it to the needs of the application, you are now down to a lithe, high performing, secure operating system – Just Enough of the Operating System, that is, or JeOS.

The idea of the server OS as an application container—what Ken suggests in very Microsoft-centric terms in his article—is not a new idea, but it is good to see those outside of the VMware space opening their eyes to the possibilities that a full-blown general purpose OS might not be the best answer anymore. Whether it is Microsoft’s technology or VMware’s technology that drives this innovation is a topic for another post, but it is pretty clear to me that this innovation is already occurring and will continue to occur.

The OS is dead, long live the OS!

(Aside: If this is the case—and I believe that it is—what does this portend for massive OS upgrades such as Windows 8 and Server 2012?)


Welcome to Technology Short Take #17, another of my irregularly-scheduled collections of various data center technology-related links, thoughts, and comments. Here’s hoping you find something useful!

Networking

  • I think it was J Metz of Cisco who posted this to Twitter, but this is a good reference to the various 10 Gigabit Ethernet modules.
  • I’ve spoken quite a bit about stretched clusters and their potential benefits. For an opposing view—especially regarding the use of stretched clusters as a disaster avoidance solution—check out this article. It’s a nice counterpoint, especially from the perspective of the network.
  • Anyone know anything about sFlow?
  • Here’s a good post on VXLAN that has some useful information. I’d just like to point out that VXLAN is really only intended to address Layer 2 communications “within” a vApp or a collection of VMs (perhaps a single organization’s VMs), and doesn’t do anything to address Layer 3 routing/accessibility for clients (or “consumers”) attempting to connect to those systems. For that, you’ll still need—at least today—technologies like OTV, LISP, and others.
  • A quick thought that I’m still exploring: what’s the impact of OpenFlow on technologies like VXLAN, NVGRE, and others? Does SDN eliminate the need for these technologies? I’d be curious to hear your thoughts.

Servers/Operating Systems

  • If you’ve adopted Mac OS X Lion 10.7, you might have noticed some problems connecting to older servers/NAS devices running AFP (Apple Filing Protocol). This Apple KB article describes a fix. Although I’m running Snow Leopard now, I was running Lion on a new MacBook Pro and I can attest that this fix does work.
  • This Microsoft KB article describes how to extend the Windows Server 2008 evaluation period. I’ve found this useful for Windows Server 2008 instances in the lab that I need for longer than 60 days but that I don’t necessarily want to activate (because they are transient).

Storage

  • Jason Boche blogged about a way to remove stubborn hosts from Unisphere. I’ve personally never seen this problem, but it’s nice to know how to address it should it occur.
  • Who would’ve thought that an HDD could serve as a cache for an SSD? Shouldn’t it be the other way around? Normally, that would probably be the case, but as described here there are certain instances and ways in which using an HDD as a cache for an SSD can improve performance.
  • Scott Drummonds wraps up his 3 part series on flash storage in part 3, which contains information on sizing flash storage. If you haven’t been reading this series, I’d recommend giving it a look.
  • Scott also weighs in on the flash as SSD vs. flash on PCIe discussion. I’d have to agree that interfaces are important, and the ability of the industry to successfully leverage flash on the PCIe bus is (today) fairly limited.
  • Henri updated his VNXe blog series with a new post on EFD and RR performance. No real surprises here, although I do have one question for Henri: is that your car in the blog header?

Virtualization

  • Interested in setting up host-only networking on VMware Fusion 4? Here’s a quick guide.
  • Kenneth Bell offers up some quick guidelines on when to deploy MCS versus PVS in a XenDesktop environment. MCS vs. PVS is a topic of some discussion on the vSpecialist mailing list as they have very different IOPS requirements and I/O profiles.
  • Speaking of VDI, Andre Leibovici has two articles that I wanted to point out. First, Andre does a deep dive on Video RAM in VMware View 5 with 3D; this has tons of good information that is useful for a VDI architect. (The note about the extra .VSWP overhead, for example, is priceless.) Andre also has a good piece on VDI and Microsoft Outlook that’s worth reading, laying out the various options for Outlook-related storage. If you want to be good at VDI, Andre is definitely a great resource to follow.
  • Running Linux in your VMware vSphere environment? If you haven’t already, check out Bob Plankers’ Linux Virtual Machine Tuning Guide for some useful tips on tuning Linux in a VM.
  • Seen this page?
  • You’ve probably already heard about Nick Weaver’s new “Uber” tool, a new VM alignment tool called UBERAlign. This tool is designed to address VM alignment, a problem with how guest file systems are formatted within a VMDK. For more information, see Nick’s announcement here.
  • Don’t disable DRS when you’re using vCloud Director. It’s as simple as that. (If you want to know why, read Chris Colotti’s post.)
  • Here are a couple of great diagrams by Hany Michael on vCloud Director management pods (both public cloud and private cloud management).
  • People automatically assume that “virtualization” means consolidating multiple workloads onto a single physical server. However, virtualization is really just a layer of abstraction, and that layer of abstraction can be used in a variety of ways. I spoke about this in early 2010. This article (written back in March of 2011) by Brad Hedlund picks up on that theme to show another way that virtualization—or, as he calls it, “inverse virtualization”—can be applied to today’s data centers and today’s applications.
  • My discussion on the end of the infrastructure engineer generated some conversations, which is good. One of the responses was by Aaron Sweemer in which he discusses the new (but not new) “data layer” and expresses a need for infrastructure engineers to be aware of this data layer. I’d agree with a general need for all infrastructure engineers to be aware of the layers above them in the stack; I’m just not convinced that we all need to become application developers.
  • Here’s a great post by William Lam on the missing piece to creating your own vSEL cloud. I’ll tell you, William blogs some of the coolest stuff…I wish I could dig in as deep as he does in some of this stuff.
  • Here’s a nice look at the use of PowerCLI to help with the automation of DRS rules.
  • One of my projects for the upcoming year is becoming more knowledgeable and conversant with the open source Xen hypervisor and Citrix XenServer. I think that the XenServer Design Handbook is going to be a useful resource for that project.
  • Interested in more information on deploying Oracle databases on vSphere? Michael Webster, aka @vcdxnz001 on Twitter, has a lengthy article with lots of information regarding Oracle on vSphere.
  • This VMware KB article describes how to enable centralized logging for vCloud Director cells. This is particularly important for HA environments, where VMware’s recommended HA strategy involves the use of multiple vCD cells.

I guess I should wrap it up here, before this post gets any longer. Thanks for reading this far, and feel free to speak up in the comments!


A recent post by Microsoft on the Windows Virtualization Team Blog titled “Hyper-V VM Density, VP:LP Ratio, Cores and Threads” caught my eye this morning as I was scanning my RSS feeds. In this post, the author (the anonymous WSV_GUY) works through the idea of cores vs. logical processors. The distinction here, in case you didn’t already know, is that many modern multi-core CPUs also support simultaneous multi-threading (SMT, also referred to as hyperthreading), which means that an eight-core CPU can actually process 16 threads simultaneously and would therefore be considered to have 16 logical processors.
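To make that arithmetic concrete, here is the math spelled out using the article’s own numbers (two eight-core, SMT-enabled CPUs) together with the 8:1 VP:LP ratio quoted later in the post. Tying the result to the “up to 256” figure assumes single-vCPU VMs.

```python
# Spelling out the cores vs. logical processors arithmetic from the discussion above.
sockets = 2                 # "a server with two physical processors"
cores_per_socket = 8        # the eight-core CPU from the example above
threads_per_core = 2        # SMT/hyperthreading doubles the thread count

logical_processors = sockets * cores_per_socket * threads_per_core   # 2 * 8 * 2 = 32 LPs
virtual_processors = logical_processors * 8                          # 8:1 VP:LP -> 256 VPs

print(f"{logical_processors} logical processors, up to {virtual_processors} VPs at 8:1")
```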

(Aside: I can see where this might be an area of some confusion; in fact, I was just discussing hyperthreading with a colleague last week. In my opinion, it’s far more accurate to refer to current-generation functionality as SMT than hyperthreading, but that’s another story for another day.)

What really caught my eye was the part of the article where the author compares and contrasts Microsoft’s approach and others’ approaches. I’ve taken a screenshot here in case the original article changes. Keep in mind that the article is based on the discussion of maximum virtual CPUs (or VPs, as WSV_GUY calls them) per logical CPU:

Figure 1. Screenshot of Microsoft blog post

So, two things pop to mind immediately. Let’s take these in order.

First—since it’s fairly obvious that Microsoft is targeting VMware as the primary “other virtualization vendor”—it should be noted that VMware does not consistently use cores as their unit of measure. As a point of proof, I present to you this screenshot taken from VMware’s Configuration Maximums document for vSphere 4.1 (available in PDF here). I’ve taken the liberty of highlighting the two key takeaways:

Figure 2. Screenshot of VMware configuration maximums document

As you can see from the documentation, VMware inconsistently switches back and forth from logical CPUs to cores. From that perspective, VMware has some work to do on presenting consistent messaging and consistent documentation. Point taken. VMware, are you listening?

But that’s not really my major beef with the article.

The second thing I noted was the statement in the Microsoft blog (see Figure 1) about “Vendor A” and statements about ratios. Remember that the entire blog post appears to be about maximum ratios: “Vendor A response 16:1 (with the qualifier that your mileage will vary)”. It seems to me that the author is referring to the statement at the bottom of the VMware configuration maximums document (see Figure 2) that discusses the achievable number of virtual processors per core. However, we’re not talking about achievable ratios, we’re talking about maximum ratios, right? Or are we?

Although the Microsoft author appears to ding VMware for making a statement about achievable ratios in an article discussing maximum supported ratios, later in the same article the author does the same thing (the emphasis is mine):

You can see that even with an 8:1 VP to LP ratio (or 16:1 VP: Core, if you prefer), Hyper-V supports very dense VM configurations. Even on a server with two physical processors, Hyper-V supports a staggering number of virtual machines (up to 256). The limiting factor won’t be Hyper-V. It will be how much memory you’ve populated the server with and how well the storage subsystem performs.

Sounds to me like Microsoft is saying that they have a maximum ratio of virtual CPUs to logical CPUs, but that the actual ratio you can achieve (the achievable ratio?) might be less than that. How is that any different from the statement in VMware’s configuration maximums document? How is Microsoft’s “approach” with regard to ratios any different, better, or clearer for the customer? Yes, VMware’s documentation is inconsistent. But when it comes to maximum ratios vs. achievable ratios, it seems to me that the pot is calling the kettle black.

If I’m off or I’m overlooking something, please let me know by speaking up in the comments. Please use full disclosure of your employer where that employment might affect your viewpoint. Thanks!

