Microsoft


Welcome to Technology Short Take #42, another installment in my ongoing series of irregularly published collections of news, items, thoughts, rants, raves, and tidbits from around the Internet, with a focus on data center-related technologies. Here’s hoping you find something useful!

Networking

  • Anthony Burke’s series on VMware NSX continues with part 5.
  • Aaron Rosen, a Neutron contributor, recently published a post about a Neutron extension called Allowed-Address-Pairs and how you can use it to create highly available instances using VRRP (via keepalived). Very cool stuff, in my opinion; see the sketch after this list for what the API usage might look like.
  • Bob McCouch has a post over at Network Computing (where I’ve recently started blogging as well—see my first post) discussing his view on how software-defined networking (SDN) will trickle down to small and mid-sized businesses. He makes comparisons among server virtualization, 10 Gigabit Ethernet, and SDN, and feels that in order for SDN to really hit this market it needs to be “not a user-facing feature, but rather a means to an end” (his words). I tend to agree—focusing on SDN is focusing on the mechanism, rather than focusing on the problems the mechanism can address.
  • Want or need to use multiple external networks in your OpenStack deployment? Lars Kellogg-Stedman shows you how in this post on multiple external networks with a single L3 agent.
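
To make the Allowed-Address-Pairs idea a bit more concrete, here’s a rough Python sketch (using python-neutronclient; the credentials, port IDs, and VIP address are all hypothetical) of permitting a VRRP virtual IP on the ports of two instances:

    from neutronclient.v2_0 import client

    # Hypothetical Keystone credentials and endpoint; adjust for your cloud
    neutron = client.Client(username='admin', password='secret',
                            tenant_name='demo',
                            auth_url='http://controller:5000/v2.0')

    VIP = '10.0.0.200'                       # address keepalived will float
    PORTS = ['port-uuid-a', 'port-uuid-b']   # placeholder Neutron port IDs

    # Without this, Neutron's anti-spoofing rules drop traffic to/from the VIP
    for port_id in PORTS:
        neutron.update_port(port_id, {
            'port': {'allowed_address_pairs': [{'ip_address': VIP}]}
        })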

Servers/Hardware

  • There was some noise this past week about Cisco UCS moving into the top x86 blade server spot for North America in Q1 2014. Kevin Houston takes a moment to explore some ideas about why Cisco was so successful in this post. I agree that Cisco had some innovative ideas in UCS—integrated management and server profiles come to mind—but my biggest beef with UCS right now is that it is still primarily a north/south (server-to-client) architecture in a world where east/west (server-to-server) traffic is becoming increasingly critical. Can UCS hold on in the face of a fundamental shift like that? I don’t know.

Security

  • Need to scramble some data on a block device? Check out this command. (I love the commandlinefu.com site. It reminds me that I still have so much yet to learn.)
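
I won’t spoil the exact one-liner, but here’s a minimal Python sketch of the same general idea: overwriting a target with pseudorandom data. The device path is a placeholder, and this is intentionally destructive, so be very sure before running anything like it:

    import os

    def scramble(path, chunk=1024 * 1024):
        """Overwrite a file or block device with pseudorandom bytes."""
        with open(path, 'r+b') as dev:
            dev.seek(0, os.SEEK_END)   # on Linux this also works for block devices
            size = dev.tell()
            dev.seek(0)
            written = 0
            while written < size:
                n = min(chunk, size - written)
                dev.write(os.urandom(n))
                written += n

    # scramble('/dev/sdX')   # DANGER: irreversibly destroys all data on the target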

Cloud Computing/Cloud Management

  • Want to play around with OpenDaylight and OpenStack? Brent Salisbury has a write-up on how to get OpenStack Icehouse (via DevStack) running together with OpenDaylight.
  • Puppet Labs has released a module that allows users to programmatically (via Puppet) provision and configure Google Compute Engine (GCE) instances. More details are available in the Puppet Labs blog post.
  • I love how developers come up with these themes around certain projects. Case in point: “Heat” is the name of the project for orchestrating resources in OpenStack, HOT is the name for the format of Heat templates, and Flame is the name of a new project to automatically generate Heat templates.

Operating Systems/Applications

  • I can’t imagine that anyone has been immune to the onslaught of information on Docker, but here’s an article that might be helpful if you’re still looking for a quick and practical introduction.
  • Many of you are probably familiar with Razor, the project that former co-workers Nick Weaver and Tom McSweeney created when they were at EMC. Tom has since moved on to CSC (via the vCHS team at VMware) and has launched a “next-generation” version of Razor called Hanlon. Read more about Hanlon and why this is a new/separate project in Tom’s blog post here.
  • Looking for a bit of clarity around CoreOS and Project Atomic? I found this post by Major Hayden to be extremely helpful and informative. Both of these projects are on my radar, though I’ll probably focus on CoreOS first as the (currently) more mature solution.
  • Linux Journal has a nice multi-page write-up on Docker containers that might be useful if you are still looking to understand Docker’s basic building blocks.
  • I really enjoyed Donnie Berkholz’ piece on microservices and the migrating Unix philosophy. It was a great view into how composability can (and does) shift over time. Good stuff, I highly recommend reading it.
  • cURL is an incredibly useful utility, especially in today’s age of HTTP-based REST APIs. Here’s a list of 9 uses for cURL that are worth knowing. This article on testing REST APIs with cURL is handy, too. (For a rough Python equivalent, see the sketch after this list.)
  • And for something entirely different…I know that folks love to beat up AppleScript, but it’s cross-application tasks like this that make it useful.
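
As promised above, here’s what that kind of cURL-style REST poking might look like from Python instead, using the requests library; the endpoint, token, and payload are all invented for illustration:

    import requests

    BASE = 'https://api.example.com/v1'      # hypothetical REST endpoint
    HEADERS = {'X-Auth-Token': 'replace-me',
               'Content-Type': 'application/json'}

    # Roughly: curl -H "X-Auth-Token: $TOKEN" $BASE/servers
    resp = requests.get(BASE + '/servers', headers=HEADERS)
    resp.raise_for_status()
    print(resp.json())

    # Roughly: curl -X POST -d '{"server": {"name": "test"}}' $BASE/servers
    resp = requests.post(BASE + '/servers', headers=HEADERS,
                         json={'server': {'name': 'test'}})
    print(resp.status_code, resp.json())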

Storage

  • Someone recently brought the open source Open vStorage project to my attention. Open vStorage compares itself to VMware VSAN, but with support for multiple storage backends and multiple hypervisors. Like a lot of other solutions, it’s implemented as a VM that presents NFS back to the hypervisors. If anyone out there has used it, I’d love to hear your feedback.
  • Erik Smith at EMC has published a series of articles on “virtual storage networks.” There’s some interesting content there—I haven’t finished reading all of the posts yet, as I want to be sure to take the time to digest them properly. If you’re interested, I suggest starting out with his introductory post (which, strangely enough, wasn’t the first post in the series), then moving on to part 1, part 2, and part 3.

Virtualization

  • Did you happen to see this write-up on migrating a VMware Fusion VM to VMware’s vCloud Hybrid Service? For now—I believe there are game-changing technologies out there that will alter this landscape—one of the very tangible benefits of vCHS is its strong interoperability with your existing vSphere (and Fusion!) workloads.
  • Need a listing of the IP addresses in use by the VMs on a given Hyper-V host? Ben Armstrong shares a bit of PowerShell code that produces just such a listing. As Ben points out, this can be pretty handy when you’re trying to track down a particular VM.
  • vCenter Log Insight 2.0 was recently announced; Vladan Seget has a decent write-up. I’m thinking of putting this into my home lab soon for gathering event information from VMware NSX, OpenStack, and the underlying hypervisors. I just need more than 24 hours in a day…
  • William Lam has an article on lldpnetmap, a little-known utility for mapping ESXi interfaces to physical switches. As the name implies, this relies on LLDP, so switches that don’t support LLDP or that don’t have LLDP enabled won’t work correctly. Still, a useful utility to have in your toolbox.
  • Technology previews of the next versions of Fusion (Fusion 7) and Workstation (Workstation 11) are available; see Eric Sloof’s articles (here and here for Fusion and Workstation, respectively) for more details.
  • vSphere 4 (and associated pieces) are no longer under general support. Sad face, but time stops for no man (or product).
  • Having some problems with VMware Fusion’s networking? Cody Bunch channels his inner Chuck Norris to kick VMware Fusion networking in the teeth.
  • Want to preview OS X Yosemite? Check out William Lam’s guide to using Fusion or vSphere to preview the new OS X beta release.

I’d better wrap this up now, or it’s going to turn into one of Chad’s posts. (Just kidding, Chad!) Thanks for taking the time to read this far!


This is session EDCS008, “Virtualizing the Network to Enable a Software-Defined Infrastructure (SDI).” The speakers are Brian Johnson (@thehevy on Twitter) from Intel and Jim Pinkerton from Microsoft. Brian is a Solution Architect; Jim is a Windows Server Architect. If you’ve ever been in one of Brian’s presentations, you know he does a great job of really diving deep in some of this stuff. (Can you tell I’m a fan?)

Brian starts the session with a review of how the data center has evolved over the last 10 years or so, driven by the widespread adoption of compute virtualization, increased CPU capacity, and the adoption of 10Gb Ethernet. This naturally leads to a discussion of software-defined networking (SDN) as a means whereby the network can evolve to keep up with the rapid pace of change and innovation in other areas of the data center. Why is this a big deal? Brian draws a comparison between property management and how IT is changing:

  • A rental house is pretty easy to manage. One tenant, infrequent change, long-term investments.
  • An apartment means more tenants, but still relatively infrequent change.
  • A hotel means lots of tenants and the ability to handle frequent change and lots of room turnover.

The connection here is VMs—we’re now running lots of VMs, and the VMs change regularly. The infrastructure needs to be ready to handle this rapid pace of change.

At this point, Jim Pinkerton of Microsoft takes over to discuss how Windows Server thinks about this issue and these challenges. According to Jim, the world has moved beyond virtualization—it now needs the ability to scale and secure many workloads cost-effectively. You need greater automation, and you need to support any type of application. Jim talks about private clouds, hosting (IaaS-type services), and public clouds. He points out that MTTR (Mean Time to Repair) is a more important metric than MTBF (Mean Time Between Failures).

Driven by how the data center is evolving (the points in the previous paragraph), the network needs to be evolved:

  • Deliver networking as part of a pooled, automated infrastructure
  • Ensure multitenant isolation, scale, and performance
  • Expand data center capacity seamlessly as per business needs
  • Reduce operational complexity

Out of these design principles comes SDN, according to Pinkerton. Key attributes of SDN, according to Microsoft, are flexibility, control, and automation. At this point Pinkerton digresses into a discussion of SMB3 and its performance characteristics over 10Gb Ethernet—which, frankly, is completely unrelated to the topic of the presentation. After a few slides of discussing SMB3 with very little relevance to the rest of the discussion, Pinkerton moves back into a discussion of the virtual switch found in Windows Server 2012 R2.

Brian now takes over again, focusing on virtual switch performance and behavior. East-west traffic between VMs can hit 60–70Gbps, because it all happens inside the server. How do we maintain that traffic performance when we see east-west traffic between servers? We can deploy more interfaces, which is commonly seen. Moving to 10Gb Ethernet is another solution. Intel needed to add features to their network controllers—features like stateless offloads, virtual machine queues, and SR-IOV support—in order to drive performance for multiple 10Gb Ethernet interfaces. SR-IOV can help address some performance and utilization concerns, but this presents a problem when working with network virtualization. If you’re bypassing the hypervisor, how do you get on the virtual network?

Brian leaves this question for now to talk about how network virtualization with overlays helps address some of the network provisioning concerns that exist today. He provides an example of how using overlays—he uses NVGRE, since this is a joint presentation with Microsoft—can allow tenants (customers, internal business units, etc.) to share private address spaces and eliminate many manual VLAN configuration tasks. He makes the point that network virtualization is possible without SDN, but SDN makes it much easier and simpler to manage and implement network virtualization.
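
To make the overlay idea concrete, here’s a rough scapy sketch of what an NVGRE-encapsulated frame looks like on the wire. NVGRE is essentially GRE carrying Transparent Ethernet Bridging (Ethertype 0x6558), with the 24-bit Virtual Subnet ID (VSID) occupying the upper bits of the GRE key field; all of the addressing below is invented:

    from scapy.all import Ether, IP, UDP, GRE

    VSID = 5001      # 24-bit tenant/virtual subnet identifier
    FLOW_ID = 0      # NVGRE uses the low 8 bits of the key as a flow ID

    # Inner frame: the tenant VM's traffic, in its private address space
    inner = (Ether(src='00:50:56:aa:bb:01', dst='00:50:56:aa:bb:02') /
             IP(src='10.0.0.5', dst='10.0.0.6') /
             UDP(dport=53))

    # Outer frame: underlay addressing between the two hypervisor hosts
    outer = (Ether() /
             IP(src='192.168.1.10', dst='192.168.1.20') /
             GRE(key_present=1, key=(VSID << 8) | FLOW_ID, proto=0x6558))

    (outer / inner).show()   # dump the full encapsulated frame

Because the tenant’s addressing lives entirely inside the encapsulation, two tenants can both use 10.0.0.0/24 without any VLAN configuration on the physical network, which is exactly the provisioning simplification Brian is describing.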

One drawback of overlays is that many network interface cards (NICs) today don’t “understand” the overlays, and therefore can’t perform certain hardware offloads that help optimize traffic and utilization. However, Brian shows a next-gen Intel NIC that will understand network overlays and will be able to perform offloads (like LSO, RSS, and VMQ) on encapsulated traffic.

This leads Brian to a discussion of Intel Open Network Platform (ONP), which encompasses two aspects:

  1. Intel ONP Switch reference design (aka “Seacliff Trail”), which leverages Intel silicon to support SDN and network virtualization
  2. Intel ONP Server reference design, which shows how to optimize virtual switching using Intel’s Data Plane Development Kit (DPDK)

The Intel ONP Server reference design (sorry, can’t remember the code name) actually uses Open vSwitch (OVS) as a core part of its design.

Intel ONP includes something called FlexPipe (this is part of the Intel FM6700 chipset) to enable faster innovation and quicker support for encapsulation protocols (like NVGRE, VXLAN, and whatever might come next). The Intel ONP Switch supports serving as a bridge to connect physical workloads into virtual networks that are encapsulated, and being able to do this at full line rate using 40Gbps uplinks.

At this point, Brian and Jim wrap up the session and open up for questions and answers.


IDF 2013: Keynote, Day 2

This is a liveblog of the day 2 keynote at Intel Developer Forum (IDF) 2013 in San Francisco. (Here is the link for the liveblog from the day 1 keynote.)

The keynote starts a few minutes after 9am, following a moment of silence to observe 9/11. Following that, Ulmonth Smith (VP of Sales and Marketing) takes the stage to kick off the keynote. Smith takes a few moments to recount yesterday’s keynote, particularly calling out the Quark announcement. Today’s keynote speakers are Kirk Skaugen, Doug Fisher, and Dr. Hermann Eul. The focus of the keynote is going to be mobility.

The first to take the stage is Doug Fisher, VP and General Manager of the Software and Services Group. Fisher sets the stage for people interacting with multiple devices, and devices that are highly mobile, supported by software and services delivered over a ubiquitous network connection. Mobility isn’t just the device, it isn’t just the software and services, it isn’t just the ecosystem—it’s all of these things. He then introduces Hermann Eul.

Eul takes the stage; he’s the VP and General Manager of the Mobile and Communications Group at Intel. Eul believes that mobility has improved our complex lives in immeasurable ways, though the technology masks much of the complexity that is involved in mobility. He walks through an example of taking a picture of “the most frequently found animal on the Internet—the cat.” Eul walks through the basic components of the mobile platform, which includes not only hardware but also mobile software. Naturally, a great CPU is key to success. This leads Eul into a discussion of the Intel Silvermont core: built with 22nm Tri-Gate transistors, multi-core architecture, 64-bit support, and a wide dynamic power operating range. This leads Eul into today’s announcement: the introduction of the Bay Trail reference platform.

Bay Trail is a mobile computing experience reference architecture. It leverages a range of Intel technologies: next-gen Intel multi-core SoC, Intel HD graphics, on-demand performance with Intel Burst Technology 2.0, and a next-gen programmable ISP (Image Service Processor). Eul then leads into a live demo of a Bay Trail product. It appears it’s running some flavor of Windows. Following that demo, Jerry Shen (CEO of Asus) takes the stage to show off the Asus T100, a Bay Trail-based product that boasts touchscreen IPS display, stereo audio, detachable keyboard dock, and an 11 hour battery life.

Following the Asus demo, Victoria Molina—a fashion industry executive—takes the stage to talk about how technology has shaped (and will continue to shape) online shopping. Molina takes us through a quasi-live demo about virtual shopping software that leverages 3-D avatars and your personal measurements. As the demo proceeds, they show you a “fit view” that shows how tight or loose the garments will fit. The software also does a “virtual cat walk” that shows how the garments will look as you walk and move around. Following the final Bay Trail demo, Eul wraps up the discussion with a review of some of the OEMs that will be introducing Bay Trail-based products. At this point, he introduces Neil Hand from Dell to introduce his Bay Trail-based product. Hand shows a Windows 8-based 8″ tablet from Dell, the start of a new family of products that will be branded Venue.

What’s next after Bay Trail? Eul shares some roadmap plans. Next up is the Merrifield platform, which will increase performance, graphics, and battery life. In 2014 will come Advanced LTE (A-LTE). Farther out is 14nm technology, called Airmont.

The final piece from Eul is a demonstration of Bay Trail and some bracelets that were distributed to the attendees, in which he uses an Intel-based Samsung tablet to control the bracelets, making them change colors, blink, and make patterns.

Now Kirk Skaugen takes the stage. Skaugen is a Senior VP and General Manager of the PC Client Group. He starts his portion of the keynote discussing the introduction of the Ultrabook, and how that form factor has evolved over the last few years to include things like touch support and 2-in-1 form factors. Skaugen takes some time to describe more fully the specifications around 2-in-1 devices, coming from hardware partners like Dell, HP, Lenovo, Panasonic, Sony, and Toshiba. This leads into a demonstration of a variety of 2-in-1 devices: sliders, fold-over designs, detachables (where the keyboard detaches), and “ferris wheel” designs where the screen flips. Now taking the stage is Tami Reller from Microsoft, whose software powered all the 2-in-1 demonstrations that Intel just showed. The keynote sort of digresses into a “Microsoft Q&A” for a few minutes before getting back on track with some Intel announcements.

From a more business-focused perspective, Intel announces 4th generation vPro-enabled Intel Core processors. Location awareness is also being integrated into vPro to enable location-based services (one example provided is automatically restricting access to confidential documentation when users leave the building). Intel also announced (this week) the Intel SSD Pro 1500. Additionally, Intel is announcing Intel Pro WiDi (Wireless Display) to better integrate wireless projectors. Finally, Intel is working with Cisco to eliminate passwords entirely. They are doing that via Intel Identity Password, which embeds keys into the hardware to enable passwordless VPNs.

Taking the stage now is Mario Müller, VP of IT Infrastructure at BMW. Müller talks about how Intel Atom CPUs are built into BMW cars, especially the new BMW i8 (BMW’s first all-electric car, if I heard correctly). He also refers to some new deployments within BMW that will leverage the new vPro-enabled Intel Core CPUs, many of which will be Ultrabooks. Müller indicates that 2-in-1 is useful, not for all employees, but certainly for select individuals who need that functionality.

Skaugen now announces Bay Trail M and Bay Trail D reference platforms. While Bay Trail (also referred to as Bay Trail T) is intended for tablets, the M and D platforms will help drive innovation in mobile and desktop form factors. After a quick look at some hardware prototypes, Skaugen takes a moment to look ahead at what Intel will be doing over the next year or so. He shows 30% reductions in power usage coming from Broadwell, which will be the 14nm technology Intel will introduce next year. From there, Skaugen shifts into a discussion of perceptual computing (3D support). He shows off a 3-D camera that can be embedded into the bezel of an Ultrabook, then shows a video of kids interacting with a prototype hardware and software combination leveraging Intel’s 3-D/perceptual computing support.

And now Doug Fisher returns to the stage. He starts his portion of the keynote by returning to the Intel-Microsoft partnership and focusing on innovations like fast start, longer battery life, and touch- and sensor-awareness, all on a highly responsive platform that also offers full compatibility around applications and devices. Part of Fisher’s presentation includes tools for developers to help make their applications aware of the 2-in-1 form factor, so that applications can automatically adjust their behavior and UI based on the form factor of the device on which they’re running.

Intel is also working closely with Google to enhance Android on Intel. This includes work on the Dalvik runtime, optimized drivers and firmware, key kernel contributions, and the NDK app bridging technology that will allow apps developed for other platforms (iOS?) to run on Android. Fisher next introduces Gonzague de Vallois of GameLoft, a game developer. Vallois talks about how they have been developing natively on Intel architecture and shows an example of a game they’ve written running on a Bay Trail T-based platform. The tools, techniques, and contributions Intel has made around Android are also being applied to Chrome OS. Fisher brings Sundar Pichai from Google on to the stage. Pichai is responsible for both Android and Chrome OS, and he talks about the momentum he’s seeing on both platforms.

Fisher says that Intel believes HTML5 to be an ideal mechanism for crossing platform boundaries, and so Intel is announcing a new version of their XDK for HTML5 development. This leads into a demonstration of using the Intel XDK (which stands for “cross-platform development kit”) to build a custom application that runs across multiple platforms. With that, he concludes the general session for day 2.


Welcome to Technology Short Take #29! This is another installment in my irregularly published series of links, thoughts, rants, and raves across various data center-related fields of technology. As always, I hope you find something useful here.

Networking

  • Who out there has played around with Mininet yet? Looks like this is another tool I need to add to my toolbox as I continue to explore networking technologies like OpenFlow, Open vSwitch, and others. (A minimal example of Mininet’s Python API follows this list.)
  • William Lam has a recent post on some useful VXLAN commands found in ESXCLI with vSphere 5.1. I’m a CLI fan, so I like this sort of stuff.
  • I still have a lot to learn about OpenFlow and networking, but this article from June of last year (it appears to have been written by Ivan Pepelnjak) discusses some of the potential scalability concerns around early versions of the OpenFlow protocol. In particular, the use of OpenFlow to perform granular per-flow control when there are thousands (or maybe only hundreds) of flows presents a scalability challenge (for now, at least). In my mind, this isn’t an indictment of OpenFlow, but rather an indictment of the way that OpenFlow is being used. I think that’s the point Ivan tried to make as well—it’s the architecture and how OpenFlow is used that makes a difference. (Is that a reasonable summary, Ivan?)
  • Brad Hedlund (who will be my co-worker starting on 2/11) created a great explanation of network virtualization that clearly breaks down the components and explains their purpose and function. Great job, Brad.
  • One of the things I like about Open vSwitch (OVS) is that it is so incredibly versatile. Case in point: here’s a post on using OVS to connect LXC containers running on different hosts via GRE tunnels. Handy!
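
On the Mininet item above, part of what makes it so handy is its Python API. Here’s a minimal sketch (run as root on a machine with Mininet installed) that builds a single-switch, three-host topology and checks connectivity:

    #!/usr/bin/env python
    from mininet.net import Mininet
    from mininet.topo import SingleSwitchTopo
    from mininet.log import setLogLevel

    if __name__ == '__main__':
        setLogLevel('info')
        net = Mininet(topo=SingleSwitchTopo(k=3))   # one switch, three hosts
        net.start()
        net.pingAll()                   # test reachability across the topology
        h1 = net.get('h1')
        print(h1.cmd('ifconfig'))       # run a command inside h1's namespace
        net.stop()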

Servers/Hardware

  • Cisco UCS is pretty cool in that it makes automation of compute hardware easier through such abstractions as server profiles. Now, you can also automate UCS with Chef. I traded a few tweets with some Puppet folks, and they indicated they’re looking at this as well.
  • Speaking of Puppet and hardware, I also saw a mention on Twitter about a Puppet module that will manage the configuration of a NetApp filer. Does anyone have a URL with more information on that?
  • Continuing the thread on configuration management systems running on non-compute hardware (I suppose this shouldn’t be under the “Servers/Hardware” section any longer!), I also found references to running CFEngine on network appliances and running Chef on Arista switches. That’s kind of cool. What kind of coolness would result from even greater integration between an SDN controller and a declarative configuration management tool? Hmmm…

Security

  • Want full-disk encryption in Ubuntu, using AES-XTS-PLAIN64? Here’s a detailed write-up on how to do it. (If you’re curious what that cipher spec actually means, see the sketch after this list.)
  • In posts and talks I’ve given about personal productivity, I’ve spoken about the need to minimize “friction,” that unspoken drag that makes certain tasks or workflows more difficult and harder to adopt. Tal Klein has a great post on how friction comes into play with security as well.
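
Since cipher specs like “aes-xts-plain64” can read like line noise, here’s a rough Python illustration (using the cryptography package) of what that string means: each 512-byte sector is encrypted with AES in XTS mode, and “plain64” means the XTS tweak is simply the sector number as a 64-bit little-endian value. This illustrates the cipher mode only, not the setup procedure from the linked write-up:

    import os
    import struct
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    key = os.urandom(64)   # AES-256-XTS takes two 256-bit keys concatenated

    def encrypt_sector(sector_num, plaintext):
        tweak = struct.pack('<Q', sector_num) + b'\x00' * 8   # "plain64" tweak
        enc = Cipher(algorithms.AES(key), modes.XTS(tweak)).encryptor()
        return enc.update(plaintext) + enc.finalize()

    ciphertext = encrypt_sector(42, b'\x00' * 512)
    print(len(ciphertext))   # 512: XTS is length-preserving, no space overhead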

Cloud Computing/Cloud Management

  • If you, like me, are constantly on the search for more quality information on OpenStack and its components, then you’ll probably find this post on getting Cinder up and running to be helpful. (I did, at least; there’s also a small API sketch after this list.)
  • Mirantis—recently the recipient of $10 million in funding from various sources—posted a write-up in late November 2012 on troubleshooting some DNS and DHCP service configuration issues in OpenStack Nova. The post is a bit specific to work Mirantis did in integrating an InfoBlox appliance into OpenStack, but might be useful in other situations as well.
  • I found this article on Packstack, a tool used to transform Fedora 17/18, CentOS 6, or RHEL 6 servers into a working OpenStack deployment (Folsom). It seems to me that lots of people understand that getting an OpenStack cloud up and running is a bit more difficult than it should be, and are therefore focusing efforts on making it easier.
  • DevStack is another proof point of the effort going into making it easier to get OpenStack up and running, although the focus for DevStack is on single-host development environments (typically virtual themselves). Here’s one write-up on DevStack; here’s another one by Cody Bunch, and yet another one by the inimitable Brent Salisbury.
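
On the Cinder item above, I find that poking at the API is a useful complement to the install write-ups. Here’s a small sketch of driving Cinder from Python with python-cinderclient (v1-era API; the credentials and endpoint are placeholders):

    from cinderclient.v1 import client

    # Placeholders: username, password/API key, tenant, Keystone auth URL
    cinder = client.Client('admin', 'secret', 'demo',
                           'http://controller:5000/v2.0',
                           service_type='volume')

    vol = cinder.volumes.create(1, display_name='test-volume')   # size in GB
    for v in cinder.volumes.list():
        print(v.id, v.display_name, v.status)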

Operating Systems/Applications

  • If you’re interested in learning Puppet, there are a great many resources out there; in fact, I’ve already mentioned many of them in previous posts. I recently came across these Example42 Puppet Tutorials. I haven’t had the chance to review them myself yet, but it looks like they might be a useful resource as well.
  • Speaking of Puppet, the puppet-lint tool is very handy for ensuring that your Puppet manifest syntax is correct and follows the style guidelines. The tool has recently been updated to help fix issues as well. Read here for more information.

Storage

  • Greg Schulz (aka StorageIO) has a couple of VMware storage tips posts you might find useful reading. Part 1 is here, part 2 is here. Enjoy!
  • Amar Kapadia suggests that adding LTFS to Swift might create an offering that could give AWS Glacier a real run for its money.
  • Gluster interests me. Perhaps it shouldn’t, but it does. For example, the idea of hosting VMs on Gluster (similar to the setup described here) seems quite interesting, and the work being done to integrate KVM/QEMU with Gluster also looks promising. If I can ever get my home lab into the right shape, I’m going to do some testing with this. Anyone done anything with Gluster?
  • Erik Smith has a very informative write-up on why FIP snooping is important when using FCoE.
  • Via this post on ten useful OpenStack Swift features, I found this page on how to build the “Swift All in One,” a useful VM for learning all about Swift.

Virtualization

  • There’s no GUI for it, but it’s kind of cool that you can indeed create VM anti-affinity rules in Hyper-V using PowerShell. This is another example of how Hyper-V continues to get more competent. Ignore Microsoft and Hyper-V at your own risk…
  • Frank Denneman takes a quick look at using user-defined NetIOC network resource pools to isolate and protect IP-based storage traffic from within the guest (i.e., using NFS or iSCSI from within the guest OS, not through the VMkernel). Naturally, this technique could be used to “protect” or “enhance” other types of important traffic flows to/from your guest OS instances as well.
  • Andre Leibovici has a brief write-up on the PowerShell module for the Nicira Network Virtualization Platform (NVP). Interesting stuff…
  • This write-up by Falko Timme on using BoxGrinder to create virtual appliances for KVM was interesting. I might have to take a look at BoxGrinder and see what it’s all about.
  • In case you hadn’t heard, OVF 2.0 has been announced/released by the DMTF. Winston Bumpus of VMware’s Office of the CTO has more information in this post. I also found the OVF 2.0 frequently asked questions (FAQs) to be helpful. Of course, the real question is how long it will be before vendors add support for OVF 2.0, and how extensive that support actually is.

And that’s it for this time around! Feel free to share your thoughts, suggestions, clarifications, or corrections in the comments below. I encourage your feedback, and thanks for reading.


Welcome to Technology Short Take #28, the first Technology Short Take for 2013. As always, I hope that you find something useful or informative here. Enjoy!

Networking

  • Ivan Pepelnjak recently wrote a piece titled “Edge and Core OpenFlow (and why MPLS is not NAT)”. It’s an informative piece—Ivan’s stuff is always informative—but what really drew my attention was his mention of a paper by Martin Casado, Teemu Koponen, and others that calls for a combination of MPLS and OpenFlow (and an evolution of OpenFlow into “edge” and “core” versions) to build next-generation networks. I’ve downloaded the paper and intend to review it in more detail. I’d love to hear from any networking experts who’ve read the paper—what are your thoughts?
  • Speaking of Ivan…it also appears that he’s quite pleased with Microsoft’s implementation of NVGRE in Hyper-V. Sounds like some of the other vendors need to get on the ball.
  • Here’s a nice explanation of CloudStack’s physical networking architecture.
  • The first fruits of Brad Hedlund’s decision to join VMware/Nicira have shown up in this joint article by Brad, Bruce Davie, and Martin Casado describing the role of network virtualization in the software-defined data center. (It doesn’t matter how many times I say or write “software-defined data center,” it still feels like a marketing term.) This post is fairly high-level and abstract; I’m looking forward to seeing more detailed and in-depth posts in the future.
  • Art Fewell speculates that the networking industry has “lost our way” and become a “big bag of protocols” in this article. I do agree with one of the final conclusions that Fewell makes in his article: that SDN (a poorly-defined and often over-used term) is the methodology of cloud computing applied to networking. Therefore, SDN is cloud networking. That, in my humble opinion, is a more holistic and useful way of looking at SDN.
  • It appears that the vCloud Connector posts (here and here) that (apparently) incorrectly identify VXLAN as a component/prerequisite of vCloud Connector have yet to be corrected. (Hat tip to Kenneth Hui at VCE.)

Servers/Hardware

Nothing this time around, but I’ll watch for content to include in future posts.

Security

  • Here’s a link to a brief (too brief, in my opinion, but perhaps I’m just being overly critical) post on KVM virtualization security, authored by Dell TechCenter. It provides some good information on securing the libvirt communication channel.

Cloud Computing/Cloud Management

  • Long-time VMware users probably remember Mike DiPetrillo, whose website has now, unfortunately, gone offline. I mention this because I’ve had this article on RabbitMQ AMQP with vCloud Director sitting in my list of “articles to write about” for a while, but some of the images were missing and I couldn’t find a link for the article. I finally found a link to a reprinted version of the article on DZone Enterprise Integration. Perhaps the article will be of some use to someone.
  • Sam Johnston talks about reliability in the cloud with a discussion on the merits of “reliable software” (software designed for failure) vs. “unreliable software” (more traditional software not designed for failure). It’s a good article, but I found the discussion between Sam and Massimo (of VMware) equally useful.

Operating Systems/Applications

Storage

  • Want some good details on the space-efficient sparse disk format in vSphere 5.1? Andre Leibovici has you covered right here.
  • Read this article for good information from Andre on a potential timeout issue with recomposing desktops and using the View Storage Accelerator (aka content-based read cache, CBRC).
  • Apparently Cormac Hogan, aka @VMwareStorage on Twitter, hasn’t gotten the memo that “best practices” is now outlawed. He should have named this series on NFS with vSphere “NFS Recommended Practices”, but even misnamed as they are, the posts still have useful information. Check out part 1, part 2, and part 3.
  • If you’d like to get a feel for how VMware sees the future of flash storage in vSphere environments, read this.

Virtualization

  • This is a slightly older post, but informative and useful nevertheless. Cormac posted an article on VAAI offloads and KAVG latency when observed in esxtop. The summary of the article is that the commands esxtop is tracking are internal to the ESXi kernel only; therefore, abnormal KAVG values do not represent any sort of problem. (Note there’s also an associated VMware KB article.)
  • More good information from Cormac here on the use of the SunRPC.MaxConnPerIP advanced setting and its impact on NFS mounts and NFS connections.
  • Another slightly older article (from September 2012) is this one from Frank Denneman on how vSphere 5.1 handles parallel Storage vMotion operations.
  • A fellow IT pro contacted me on Twitter to see if I had any idea why some shares on his Windows Server VM weren’t working. As it turns out, the problem is related to hotplug functionality; the OS sees the second drive as “removable” due to hotplug functionality, and therefore shares don’t work. The problem is outlined in a bit more detail here.
  • William Lam outlines how to use new tagging functionality in esxcli in vSphere 5.1 for more comprehensive scripted configurations. The new tagging functionality—if I’m reading William’s write-up correctly—means that you can configure VMkernel interfaces for any of the supported traffic types via esxcli. Neat.
  • Chris Wahl has a nice write-up on the behavior of Network I/O Control with multi-NIC vMotion traffic. It was pointed out in the comments that the behavior Chris describes is documented, but the write-up is still handy, and an important factor to keep in mind in your designs.

I suppose I should end it here, before this “short take” turns into a “long take”! In any case, courteous comments are always welcome, so if you have additional information, clarifications, or corrections to share regarding any of the articles or links in this post, feel free to speak up below.


Welcome to Technology Short Take #23, another collection of links and thoughts related to data center technologies like networking, storage, security, cloud computing, and virtualization. As usual, we have a fairly wide-ranging collection of items this time around. Enjoy!

Networking

  • A couple of days ago I learned that there are a couple open source implementations of LISP (Locator/ID Separation Protocol). There’s OpenLISP, which runs on FreeBSD, and there’s also a project called LISPmob that brings LISP to Linux. From what I can tell, LISPmob appears to be a bit more focused on the endpoint than OpenLISP.
  • In an earlier post on STT, I mentioned that STT’s re-use of the TCP header structure could cause problems with intermediate devices. It looks like someone has figured out how to allow STT through a Cisco ASA firewall; the configuration is here.
  • Jose Barreto posted a nice breakdown of SMB Multichannel, a bandwidth-enhancing feature of SMB 3.0 that will be included in Windows Server 2012. It is, not unexpectedly, only supported between two SMB 3.0-capable endpoints (which, at this time, means two Windows Server 2012 hosts). Hopefully additional vendors will adopt SMB 3.0 as a network storage protocol. Just don’t call it CIFS!
  • Reading this article, you might deduce that Ivan really likes overlay/tunneling protocols. I am, of course, far from a networking expert, but I do have to ask: at what point does it become necessary (if ever) to move some of the intelligence “deeper” into the stack? Networking experts everywhere advocate the “complex edge-simple core” design, but does it ever make sense to move certain parts of the edge’s complexity into the core? Do we hamper innovation by insisting that the core always remain simple? As I said, I’m not an expert, so perhaps these are stupid questions.
  • Massimo Re Ferre posted a good article on a typical VXLAN use case. Read this if you’re looking for a more concrete example of how VXLAN could be used in a typical enterprise data center.
  • Bruce Davie of Nicira helps explain the difference between VPNs and network virtualization; this is a nice companion article to his colleague’s post (which Bruce helped to author) on the difference between network virtualization and software-defined networking (SDN).
  • The folks at Nicira also collaborated on this post regarding software overhead of tunneling. The results clearly favor STT (which was designed to take advantage of NIC offloading) over GRE, but the authors do admit that as “GRE awareness” is added to the cards, that protocol’s performance will improve.
  • Oh, and while we’re on the topic of SDN…you might have noticed that VMware has taken to using the term “software-defined” to describe many of the services that vSphere (and related products) provide. This includes the use of software-defined networking (SDN) to describe the functionality of vSwitches, distributed vSwitches, vShield, and other features. Personally, I think that the term software-based networking (SBN) is far more applicable than SDN to what VMware does. Is it just me?
  • Brad Hedlund wrote this post a few months ago, but I’m just now getting around to commenting about it. The gist of the article—forgive me if I munge it too much, Brad—is that the use of open source software components might dramatically change the shape/way/means in which networking protocols and standards are created and utilized. If two components are communicating over the network via open source components, is some sort of networking standard needed to avoid being “proprietary”? It’s an interesting thought, and goes to show the power of open source on the IT industry. Great post, Brad.
  • One more mention of OpenFlow/SDN: it’s great technology (and I’m excited about the possibilities that it creates), but it’s not a silver bullet for scalability.

Security

  • I came across this interesting post on a security attack based on VMDKs. It’s quite an interesting read, even if the probability of being able to actually leverage this attack vector is fairly low (as I understand it).

Storage

  • Chris Wahl has a good series on NFS with VMware vSphere. You can catch the start of the series here. One comment on the testing he performs in the “Same Subnet” article: if I’m not mistaken, I believe the VMkernel selection is based upon which VMkernel interface is listed in the first routing table entry for the subnet. This is something about which I wrote back in 2008, but I’m glad to see Chris bringing it to light again.
  • George Crump published this article on using DCB to enhance iSCSI. (Note: The article is quite favorable to Dell, and George discloses an affiliation with Dell at the end of the article.) One thing I did want to point out is that—if I recall correctly—the 802.1Qbb standard for Priority Flow Control only defines a single “no drop” class of service (CoS). Normally that CoS is assigned to FCoE traffic, but in an environment without FCoE you could assign it to iSCSI. In an environment with both, that could be a potential problem, as I see it. Feel free to correct me in the comments if my understanding is incorrect.
  • Microsoft is introducing data deduplication in Windows Server 2012, and here is a good post providing an introduction to Microsoft’s deduplication implementation.
  • SANRAD VXL looks interesting—anyone have any experience with it? Or more detailed technical information?
  • I really enjoyed Scott Drummonds’ recent storage performance analysis post. He goes pretty deep into some storage concepts and provides real-world, relevant information and recommendations. Good stuff.

Cloud Computing/Cloud Management

  • After moving CloudStack to the Apache Software Foundation, Citrix published this discourse on “open washing” and provides a set of questions to determine the “openness” of software projects with which you may become involved. While the article is clearly structured to favor Citrix and CloudStack, the underlying point—to understand exactly what “open source” means to your vendors—is valid and worth consideration.
  • Per the AWS blog, you can now export EC2 instances out of Amazon and into another environment, including VMware, Hyper-V, and Xen environments. I guess this kind of puts a dent in the whole “Hotel California” marketing play that some vendors have been using to describe Amazon.
  • Unless you’ve been hiding under a rock for the past few weeks, you’ve most likely heard about Nick Weaver’s Razor project. (If you haven’t heard about it, here’s Nick’s blog post on it.) To help with the adoption/use of Razor, Nick also recently announced an overview of the Razor API.

Virtualization

  • Frank Denneman continues to do a great job writing solid technical articles. The latest article to catch my eye (and I’m sure that I missed some) was this post on combining affinity rule types.
  • This is an interesting post on a vSphere 5 networking bug affecting iSCSI that was fixed in vSphere 5.0 Update 1.
  • Make a note of this VMware KB article regarding UDP traffic on Linux guests using VMXNET3; the workaround today is using E1000 instead.
  • This post is actually over a year old, but I just came across it: Luc Dekens posted a PowerCLI script that allows a user to find the maximum IOPS values over the last 5 minutes for a number of VMs. That’s handy. (BTW, I have fixed the error that kept me from seeing the post when it was first published—I’ve now subscribed to Luc’s blog.)
  • Want to use a Debian server to provide NFS for your VMware environment? Here is some information that might prove helpful.
  • Jeremy Waldrop of Varrow provides some information on creating a custom installation ISO for ESXi 5, Nexus 1000V, and PowerPath/VE. Cool!
  • Cormac Hogan continues to pump out some very useful storage-focused articles on the official VMware vSphere blog. For example, both the VMFS locking article and the article on extending an EagerZeroedThick disk were great posts. I sincerely hope that Cormac keeps up the great work.
  • Thanks to this Project Kronos page, I’ve been able to successfully set up XCP on Ubuntu Server 12.04 LTS. Here’s hoping it gets easier in future releases.
  • Chris Colotti takes on some vCloud Director “challenges”, mostly surrounding vShield Edge and vCloud Director’s reliance on vShield Edge for specific networking configurations. While I do agree with many of Chris’ points, I personally would disagree that using vSphere HA to protect vShield Edge is an acceptable configuration. I was also unable to find any articles that describe how to use vSphere FT to protect the deployed vShield appliances. Can anyone point out one or more of those articles? (Put them in the comments.)
  • Want to use Puppet to automate the deployment of vCenter Server? See here.

I guess it’s time to wrap up now, lest my “short take” get even longer than it already is! Thanks for reading this far, and I hope that I’ve shared something useful with you. Feel free to speak up in the comments if you have questions, thoughts, or clarifications.


Welcome to Technology Short Take #22! Once again, I find myself without too many articles to share with you this time around. I guess that will make things a bit easier for you, the reader, but it does make me question whether or not I’m “listening” to the right communities. If any readers have suggestions on sources of information to which I should be subscribing or I should be following, I’d love to hear your suggestions.

In any case, let’s get into the meat of it. I hope you find something useful!

Networking

Security

  • I have to agree with Tom Hollingsworth that we often create backdoors by design simply out of our own laziness. I’ve heard it said—in fact I may have used the statement myself—that no amount of security can fix stupidity. That might be a bit strong, but it does apply to the “shortcuts” that we create for ourselves or our customers in our designs.

Servers/Hardware

  • Kevin Houston (who works for Dell) posted an article about a recent test report comparing power usage between Dell blades and Cisco UCS blades. If you’re comparing these two solutions, find a comparable report from Cisco and then draw your own conclusions. (Always get multiple views on a topic like this, because every vendor—and I know because I work for a vendor, too—will spin the report in their favor.)

Virtualization

That’s it for this time around. I hope that you have found something useful here. If anyone has any suggestions for sites/forums they’ve found helpful with data center-focused topics, I’d love for you to add that information in the comments.


I just finished reading a post on ZDNet titled “Are Hyper-V and App-V the new Windows Servers?” in which the author—Ken Hess—postulates that the rise of virtualization will shape the future of the Microsoft Windows OS such that, in his words:

The Server OS itself is an application. It’s little more than (or hopefully a little less than) Server Core.

The author also advises his readers that they “have to learn a new vocabulary” and that they’ll “deploy services and applications as workloads.”

Does any of this sound familiar to you?

It should. Almost 6 years ago, I was carrying on a blog conversation (with a web site that is now defunct) about the future of the OS. I speculated at that point that the general-purpose OS as we then knew it would be gone within 5 to 10 years. It looks like that prediction might be reasonably accurate. (Sadly, I was horribly wrong about Mac OS X, but everyone’s allowed to be wrong now and then, aren’t they?)

It should further sound familiar because almost 5 years ago, Srinivas Krishnamurti of VMware wrote an article describing a new (at the time) concept. This new concept was the idea of a carefully trimmed operating system (OS) instance that served as an application container:

By ripping out the operating system interfaces, functions, and libraries and automatically turning off the unnecessary services that your application does not require, and by tailoring it to the needs of the application, you are now down to a lithe, high performing, secure operating system – Just Enough of the Operating System, that is, or JeOS.

The idea of the server OS as an application container—what Ken suggests in very Microsoft-centric terms in his article—is not a new idea, but it is good to see those outside of the VMware space opening their eyes to the possibilities that a full-blown general purpose OS might not be the best answer anymore. Whether it is Microsoft’s technology or VMware’s technology that drives this innovation is a topic for another post, but it is pretty clear to me that this innovation is already occurring and will continue to occur.

The OS is dead, long live the OS!

An aside: if this is the case—and I believe that it is—what does this portend for massive OS upgrades such as Windows 8 (and Server 2012)?


Yesterday I posted an article regarding SR-IOV support in the next release of Hyper-V, and I commented in that article that I hoped VMware added SR-IOV support to vSphere. A couple of readers commented about why I felt SR-IOV support was important, what the use cases might be, and what the potential impacts could be to the vSphere networking environment. Those are all excellent questions, and I wanted to take the time to discuss them in a bit more detail than simply a response to a blog comment.

First, it’s important to point out—and this was stated in John Howard’s original series of posts to which I linked; in particular, this post—that SR-IOV is a PCI standard; therefore, it could potentially be used with any PCI device that supports SR-IOV. While we often discuss this in the networking context, it’s equally applicable in other contexts, including the HBA/CNA space. Maybe it’s just because in my job at EMC I see some interesting things that might never see the light of day (sorry, can’t say any more!), but I could definitely see the use for the ability to have multiple virtual HBAs/CNAs in an ESXi host. Think about the ability to pass an HBA/CNA VF (virtual function) up to a guest operating system on a host, and what sorts of potential advantages that might give you:

  • The ability to zone on a per-VM basis
  • Per-VM (more accurately, per-initiator) visibility into storage traffic and storage trends

Of course, this sort of model is not without drawbacks: in its current incarnation, assigning PCI devices to VMs breaks vMotion. But is that limitation a byproduct of the current way it’s being done, and would SR-IOV help alleviate that potential concern or issue? It sounds like Microsoft has found a way to leverage SR-IOV for NIC assignment without sacrificing live migration support (see John’s latest SR-IOV post). I suspect that bringing SR-IOV awareness into the hypervisor—and potentially into the guest OS via each vendor’s paravirtualized device drivers, aka VMware Tools in a vSphere context—might go a long way to helping address the live migration concerns with direct device assignment. Of course, I’m not a developer or a programmer, so feel free to (courteously!) correct me in the comments.
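
For a concrete point of reference, here’s roughly what direct assignment of an SR-IOV VF to a guest looks like today on the Linux/KVM side of the house, using the libvirt Python bindings. The domain name and the VF’s PCI address are made up; the point is simply that the hypervisor hands the VF to the guest as a hostdev-backed NIC:

    import libvirt

    # libvirt models a VF-backed NIC as <interface type='hostdev'>; the PCI
    # address below would be one of the VFs exposed by the physical adapter.
    vf_xml = """
    <interface type='hostdev' managed='yes'>
      <source>
        <address type='pci' domain='0x0000' bus='0x04' slot='0x10' function='0x0'/>
      </source>
    </interface>
    """

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByName('test-vm')
    dom.attachDevice(vf_xml)   # guest traffic now bypasses the virtual switch

It’s precisely that bypass that makes live migration tricky, which is why SR-IOV awareness in the hypervisor (and in the guest’s paravirtualized drivers) seems like the key to resolving the conflict.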

Are there use cases beyond providing virtual HBAs/CNAs? Here are a couple questions to get you thinking:

  • Could you potentially leverage a single PCI fax board among multiple VMs (clearly you’d have to manage fax board capacity) to virtualize your fax servers?
  • Would the presentation of virtual GPUs to a guest OS eliminate the need for a paravirtualized video driver, and would the lack of a paravirtualized video driver streamline the virtualization layer even more? The same goes for virtual NICs.

I’m not saying that all these things are possible—again, I’m not a developer so I could be way off base—but it seems to me that SR-IOV at least enables us to consider these sorts of options.

Regarding networking, this is where I see a lot of potential for SR-IOV. While VMware’s networking code is highly optimized, the movement of Ethernet switching into hardware on a NIC that supports SR-IOV has got to free up CPU cycles and reduce virtualization overhead. It also seems to me that putting that Ethernet switching on an SR-IOV NIC and then adding 802.1Qbg (EVB/VEPA) support would be a sweet combination. Mix in a hypervisor-to-NIC control plane for dynamically provisioning SR-IOV VFs and you’ve got a solution where provisioning a VM on a host dynamically creates an SR-IOV VF, attaches it to the VM, and uses EVB to provision a new VLAN on-demand onto that NIC. Is that a “pie in the sky” dream scenario? I’m not so sure that it’s that far off.

What do you think? Please share your thoughts in the comments below. Where applicable, please provide disclosure. For example, I work for EMC, but I speak for myself.


While browsing my list of RSS feeds tonight, I came across a series of posts by John Howard, a senior program manager on the Hyper-V team at Microsoft, describing SR-IOV support in the next version of Hyper-V, found in Windows “8”. I hadn’t heard that Microsoft was adding SR-IOV support to the next version of Hyper-V, so when I saw that I was surprised. Personally, I think SR-IOV support is a big deal (see the note at the end of this post for why).

If you’re not familiar with SR-IOV, I suggest you read this quick SR-IOV tutorial I published on this site in late 2009.

Here are the links to John’s SR-IOV in Hyper-V posts:

Everything you wanted to know about SR-IOV in Hyper-V, part 1
Everything you wanted to know about SR-IOV in Hyper-V, part 2
Everything you wanted to know about SR-IOV in Hyper-V, part 3
Everything you wanted to know about SR-IOV in Hyper-V, part 4
Everything you wanted to know about SR-IOV in Hyper-V, part 5

It’s great to see Microsoft adding SR-IOV support to Hyper-V; this brings SR-IOV out of the niche Linux market and into a broader, more mainstream market. This also applies some competitive pressure against market leader VMware, who now has to respond in some fashion—either by adding SR-IOV support to their ESXi hypervisor, or by explaining why SR-IOV support isn’t necessary. Personally, I hope that VMware does the former and not the latter.

(By the way, for those of you wondering why SR-IOV is important, there are lots of potential synergies here—in my view, at least—between hardware switching on an SR-IOV NIC and things like software-defined networking.)

