Welcome to Technology Short Take #40. The content is a bit light this time around; I thought I’d give you, my readers, a little break. Hopefully there’s still some useful and interesting stuff here. Enjoy!

Networking

  • Bob McCouch has a nice write-up on options for VPNs to AWS. If you’re needing to build out such a solution, you might want to read his post for some additional perspectives.
  • Matthew Brender touches on a networking issue present in VMware ESXi with regard to VMkernel multi-homing. This is something others have touched on before (including myself, back in 2008—not 2006 as I tweeted one day), but Matt’s write-up is concise and to the point. You’ll definitely want to keep this consideration in mind for your designs. Another thing to consider: vSphere 5.5 introduces the idea of multiple TCP/IP stacks, each with its own routing table. As the ability to use multiple TCP/IP stacks extends throughout vSphere, it’s entirely possible this limitation will eventually go away.
  • YAOFC (Yet Another OpenFlow Controller), interesting only because it focuses on issues of scale (tens of thousands of switches with hundreds of thousands of endpoints). See here for details.

Servers/Hardware

  • Intel recently announced a refresh of the E5 CPU line; Kevin Houston has more details here.

Security

  • This one slipped past me in the last Technology Short Take, so I wanted to be sure to include it here. Mike Foley—whom I’m sure many of you know—recently published an ESXi security whitepaper. His blog post provides more details, as well as a link to download the whitepaper.
  • The OpenSSL “Heartbleed” vulnerability has captured a great deal of attention (justifiably so). Here’s a quick article on how to assess if your Linux-based server is affected.
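
If you just need a quick first pass before reading that article, checking OpenSSL from the shell is a reasonable start. This is only a sketch: the version ranges below are from memory (1.0.1 through 1.0.1f affected, 1.0.1g fixed), and distributions often backport fixes without bumping the version string, so verify against your distro’s security advisories:

openssl version -a
# OpenSSL 1.0.1 through 1.0.1f are affected; 1.0.1g and later are fixed.
# On Debian/Ubuntu the fix may be backported, so check the package changelog
# for the Heartbleed CVE rather than trusting the version string alone
# (assuming your distro ships a changelog at this path):
zgrep -i CVE-2014-0160 /usr/share/doc/openssl/changelog.Debian.gz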

Cloud Computing/Cloud Management

  • I recently built a Windows Server 2008 R2 image for use in my OpenStack home lab. This isn’t as straightforward as building a Linux image (no surprises there), but I did find a few good articles that helped along the way. If you find yourself needing to build a Windows image for OpenStack, check out creating a Windows image on OpenStack (via Gridcentric) and building a Windows image for OpenStack (via Brent Salisbury). You might also check out Cloudbase.it, which offers a version of cloud-init for Windows as well as some prebuilt evaluation images. (Note: I was unable to get the prebuilt images to download, but YMMV.) I’ve also sketched out the rough build workflow just after this list.
  • Speaking of building OpenStack images, here’s a “how to” guide on building a Debian 7 cloud image for OpenStack.
  • Sean Roberts recently launched a series of blog posts about various OpenStack projects that he feels are important. The first project he highlights is Congress, a policy management project that has recently gotten a fair bit of attention (see a reference to Congress at the end of this recent article on the mixed messages from Cisco on OpFlex). In my opinion, Congress is a big deal, and I’m really looking forward to seeing how it evolves.
  • I have a related item below under Virtualization, but I wanted to point this out here: work is being done on a VIF driver to connect Docker containers to Open vSwitch (and thus to OpenStack Neutron). Very cool. See here for details.
  • I love that Cody Bunch thinks a lot like I do, like this quote from a recent post sharing some links on OpenStack Heat: “That generally means I’ve got way too many browser tabs open at the moment and need to shut some down. Thus, here comes a huge list of OpenStack links and resources.” Classic! Anyway, check out the list of Heat resources; you’re bound to find something useful there.
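
Following up on the Windows image item above: here’s the rough shape of the build workflow as I understand it from the articles linked there. Treat this as a sketch rather than a recipe; the file names, the virtio driver ISO, and the Glance upload step are assumptions from my own lab, not anything prescribed by those articles.

# Create an empty disk and boot the Windows installer with virtio devices
# attached (virtio-win.iso supplies the storage/network drivers Windows needs):
qemu-img create -f qcow2 ws2008r2.qcow2 20G
virt-install --name ws2008r2-build --ram 2048 --vcpus 2 \
  --disk path=ws2008r2.qcow2,format=qcow2,bus=virtio \
  --disk path=virtio-win.iso,device=cdrom \
  --cdrom ws2008r2.iso \
  --network network=default,model=virtio \
  --graphics vnc --os-variant win2k8

# After installing Windows, the virtio drivers, and cloudbase-init, then
# sysprepping and shutting the guest down, upload the result to Glance:
glance image-create --name ws2008r2 --disk-format qcow2 \
  --container-format bare --file ws2008r2.qcow2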

Operating Systems/Applications

  • A short while back I had a Twitter conversation about spinning up a Minecraft server for my kids in my OpenStack home lab. That led to a few other discussions, one of which was how cool it would be if you could use Heat autoscaling to scale Minecraft. Then someone sends me this.
  • Per the Microsoft Windows Server Team’s blog post, the Windows Server 2012 R2 Update is now generally available (there’s also a corresponding update for Windows 8.1).

Storage

  • Did you see that EMC released a virtual edition of VPLEX? It’s being called the “data plane” for software-defined storage. VPLEX is an interesting product, no doubt, and the introduction of a virtual edition is intriguing (but not entirely unexpected). I did find it unusual that the release of the virtual edition signalled the addition of a new feature called “MetroPoint”, which allows two sites to replicate back to a single site. See Chad Sakac’s blog post for more details.
  • This discussion on MPIO and in-guest iSCSI is a great reminder that designing solutions in a virtualized data center (or, dare I say it—a software-defined data center?) isn’t the same as designing solutions in a non-virtualized environment.

Virtualization

  • Ben Armstrong talks briefly about Hyper-V protected networks, a feature that protects a VM against network outages by migrating the VM to a different host if a link failure occurs. This is kind of handy, but it requires Windows Server clustering in order to function (since live migration in Hyper-V requires Windows Server clustering). A question for readers: is Windows Server clustering still much the same as it was in years past? It was a great solution back then, but now it seems outdated.
  • At the same time, though, Microsoft is making some useful networking features easily accessible in Hyper-V. Two more of Ben’s articles show off the DHCP Guard and Router Guard features available in Hyper-V on Windows Server 2012.
  • There have been a pretty fair number of posts talking about nested ESXi (ESXi running as a VM on another hypervisor), either on top of ESXi or on top of VMware Fusion/VMware Workstation. What I hadn’t seen—until now—was how to get that working with OpenStack. Here’s how Mathias Ewald made it work.
  • And while we’re talking nested hypervisors, be sure to check out William Lam’s post on running a nested Xen hypervisor with VMware Tools on ESXi.
  • Check out this potential way to connect Docker containers with Open vSwitch (which then in turn opens up all kinds of other possibilities).
  • Jason Boche regales us with a tale of a vCenter 5.5 Update 1 upgrade that results in missing storage providers. Along the way, he also shares some useful information about Profile-Driven Storage in general.
  • Eric Gray shares information on how to prepare an ESXi ISO for PXE booting.
  • PowerCLI 5.5 R2 has some nice new features. Skip over to Alan Renouf’s blog to read up on what is included in this latest release.

I should close things out now, but I do have one final link to share. I really enjoyed Nick Marshall’s recent post about the power of a tweet. In the post, Nick shares how three tweets—one with Duncan Epping, one with Cody Bunch, and one with me—have dramatically altered his life and his career. It’s pretty cool, if you think about it.

Anyway, enough is enough. I hope that you found something useful here. I encourage readers to contribute to the discussion in the comments below. All courteous comments are welcome.

Welcome to Technology Short Take #39, in which I share a random assortment of links, articles, and thoughts from around the world of data center-related technologies. I hope you find something useful—or at least something interesting!

Networking

  • Jason Edelman has been talking about the idea of a Common Programmable Abstraction Layer (CPAL). He introduces the idea, then goes on to explore—as he puts it—the power of a CPAL. I can’t help but wonder if this is the right level at which to put the abstraction layer. Is the abstraction layer better served by being integrated into a cloud management platform, like OpenStack? Naturally, the argument then would be, “Not everyone will use a cloud management platform,” which is a valid argument. For those customers who won’t use a cloud management platform, I would then ask: will they benefit from a CPAL? I mean, if they aren’t willing to embrace the abstraction and automation that a cloud management platform brings, will abstraction and automation at the networking layer provide any significant benefit? I’d love to hear others’ thoughts on this.
  • Ethan Banks also muses on the need for abstraction.
  • Craig Matsumoto of SDN Central helps highlight a recent (and fairly significant) development in networking protocols—the submission of the Generic Network Virtualization Encapsulation (Geneve) proposal to the IETF. Jointly authored by VMware, Microsoft, Red Hat, and Intel, this new protocol proposal attempts to bring together the strengths of the various network virtualization encapsulation protocols out there today (VXLAN, STT, NVGRE). This is interesting enough that I might actually write up a separate blog post about it; stay tuned for that.
  • Lee Doyle provides an analysis of the market for network virtualization, which includes some introductory information for those who might be unfamiliar with what network virtualization is. I might contend that Open vSwitch (OVS) alone isn’t an option for network virtualization, but that’s just splitting hairs. Overall, this is a quick but worthy read if you are trying to get started in this space.
  • Don’t think this “software-defined networking” thing is going to take off? Read this, and then let me know what you think.
  • Chris Margret has a nice dissection of how bash completion works, particularly in regards to the Cumulus Networks implementation.

Servers/Hardware

  • Via Kevin Houston, you can get more details on the Intel E7 v2 and new blade servers based on the new CPU. x86 marches on!
  • Another interesting tidbit regarding hardware: it seems as if we are now seeing the emergence of another round of “hardware offloads.” The first round came about around 2006 when Intel and AMD first started releasing their hardware assists for virtualization (Intel VT and AMD-V, respectively). That technology was only “so-so” at first (VMware ESX continued to use binary translation [BT] because it was still faster than the hardware offloads), but it quickly matured and is now leveraged by every major hypervisor on the market. This next round of hardware offloads seems targeted at network virtualization and related technologies. Case in point: a relatively small company named Netronome (I’ve spoken about them previously, first back in 2009 and again a year later) recently announced a new set of network interface cards (NICs) expressly designed to provide hardware acceleration for software-defined networking (SDN), network functions virtualization (NFV), and network virtualization solutions. You can get more details from the Netronome press release. This technology is actually quite interesting; I’m currently talking with Netronome about testing it with VMware NSX and will provide more details as that evolves.

Security

  • Ben Rossi tackles the subject of security in a software-defined world, talking about how best to integrate security into SDN-driven architectures and solutions. It’s a high-level article and doesn’t get into a great level of detail, but does point out some of the key things to consider.

Cloud Computing/Cloud Management

  • “Racker” James Denton has some nice articles on OpenStack Neutron that you might find useful. He starts out with discussing the building blocks of Neutron, then goes on to discuss building a simple flat network, using VLAN provider networks, and Neutron routers and the L3 agent. And if you need a breakdown of provider vs. tenant networks in Neutron, this post is also quite handy.
  • Here’s a couple of quick walk-throughs (first one, second one) on installing OpenStack. They don’t provide any in-depth explanations of what’s going on, why you’re doing what you’re doing, or how it relates to the rest of the steps, but you might find something useful nevertheless.
  • Thinking of building your own OpenStack cloud in a home lab? Kevin Jackson—who along with Cody Bunch co-authored the OpenStack Cloud Computing Cookbook, 2nd Edition—has three articles up on his home OpenStack setup. (At least, I’ve only found three articles so far.) Part 1 is here, part 2 is here, and part 3 is here. Enjoy!
  • This post attempts to describe some of the core (mostly non-technical) differences between OpenStack and OpenNebula. It is published on the OpenNebula.org site, so keep that in mind as it is (naturally) biased toward OpenNebula. It would be quite interesting to me to see a more technically-focused discussion of the two approaches (and, for that matter, let’s include CloudStack as well). Perhaps this already exists—does anyone know?
  • CloudScaling recently added a Google Compute Engine (GCE) API compatibility module to StackForge, to allow users to leverage the GCE API with OpenStack. See more details here.
  • Want to run Hyper-V in your OpenStack environment? Check this out. Also from the same folks is a version of cloud-init for Windows instances in cloud environments. I’m testing this in my OpenStack home lab now, and hope to have more information soon.

Operating Systems/Applications

Storage

Virtualization

  • Brendan Gregg of Joyent has an interesting write-up comparing virtualization performance between Zones (apparently referring to Solaris Zones, a form of OS virtualization/containerization), Xen, and KVM. I might disagree that KVM is a Type 2 hardware virtualization technology, pointing out that Xen also requires a Linux-based dom0 in order to function. (The distinction between a Type 1 that requires a general purpose OS in a dom0/parent partition and a Type 2 that runs on top of a general purpose OS is becoming increasingly blurred, IMHO.) What I did find interesting was that they (Joyent) run a ported version of KVM inside Zones for additional resource controls and security. Based on the results of his testing—performed using DTrace—it would seem that the “double-hulled virtualization” doesn’t really impact performance.
  • Pete Koehler—via Jason Langer’s blog—has a nice post on converting in-guest iSCSI volumes to native VMDKs. If you’re in a similar situation, check out the post for more details.
  • This is interesting. Useful, I’m not so sure about, but definitely interesting.
  • If you are one of the few people living under a rock who doesn’t know about PowerCLI, Alan Renouf is here to help.

It’s time to wrap up; this post has already run longer than usual. There was just so much information that I want to share with you! I’ll be back soon-ish with another post, but until then feel free to join (or start) the conversation by adding your thoughts, ideas, links, or responses in the comments below.

IDF 2013: Keynote, Day 2

This is a liveblog of the day 2 keynote at Intel Developer Forum (IDF) 2013 in San Francisco. (Here is the link for the liveblog from the day 1 keynote.)

The keynote starts a few minutes after 9am, following a moment of silence to observe 9/11. Following that, Ulmonth Smith (VP of Sales and Marketing) takes the stage to kick off the keynote. Smith takes a few moments to recount yesterday’s keynote, particularly calling out the Quark announcement. Today’s keynote speakers are Kirk Skaugen, Doug Fisher, and Dr. Hermann Eul. The focus of the keynote is going to be mobility.

The first to take the stage is Doug Fisher, VP and General Manager of the Software and Services Group. Fisher sets the stage for people interacting with multiple devices, and devices that are highly mobile, supported by software and services delivered over a ubiquitous network connection. Mobility isn’t just the device, it isn’t just the software and services, it isn’t just the ecosystem—it’s all of these things. He then introduces Hermann Eul.

Eul takes the stage; he’s the VP and General Manager of the Mobile and Communications Group at Intel. Eul believes that mobility has improved our complex lives in immeasurable ways, though the technology masks much of the complexity that is involved in mobility. He walks through an example of taking a picture of “the most frequently found animal on the Internet—the cat.” Eul then outlines the basic components of the mobile platform, which includes not only hardware but also mobile software. Naturally, a great CPU is key to success. This leads Eul into a discussion of the Intel Silvermont core: built with 22nm Tri-Gate transistors, multi-core architecture, 64-bit support, and a wide dynamic power operating range. That, in turn, sets up today’s announcement: the introduction of the Bay Trail reference platform.

Bay Trail is a mobile computing experience reference architecture. It leverages a range of Intel technologies: next-gen Intel multi-core SoC, Intel HD graphics, on-demand performance with Intel Burst Technology 2.0, and a next-gen programmable ISP (Image Service Processor). Eul then leads into a live demo of a Bay Trail product. It appears it’s running some flavor of Windows. Following that demo, Jerry Shen (CEO of Asus) takes the stage to show off the Asus T100, a Bay Trail-based product that boasts a touchscreen IPS display, stereo audio, a detachable keyboard dock, and an 11-hour battery life.

Following the Asus demo, Victoria Molina—a fashion industry executive—takes the stage to talk about how technology has/will shape online shopping. Molina takes us through a quasi-live demo about virtual shopping software that leverages 3-D avatars and your personal measurements. As the demo proceeds, they show you a “fit view” that shows how tight or loose the garments will fit. The software also does a “virtual cat walk” that shows how the garments will look as you walk and move around. Following the final Bay Trail demo, Eul wraps up the discussion with a review of some of the OEMs that will be introducing Bay Trail-based products. At this point, he introduces Neil Hand from Dell to introduce his Bay Trail-based product. Hand shows a Windows 8-based 8" tablet from Dell, the start of a new family of products that will be branded Venue.

What’s next after Bay Trail? Eul shares some roadmap plans. Next up is the Merrifield platform, which will increase performance, graphics, and battery life. In 2014 will come Advanced LTE (A-LTE). Farther out is 14nm technology, called Airmont.

The final piece from Eul is a demonstration of Bay Trail and some bracelets that were distributed to the attendees, in which he uses an Intel-based Samsung tablet to control the bracelets, making them change colors, blink, and make patterns.

Now Kirk Skaugen takes the stage. Skaugen is a Senior VP and General Manager of the PC Client Group. He starts his portion of the keynote discussing the introduction of the Ultrabook, and how that form factor has evolved over the last few years to include things like touch support and 2-in-1 form factors. Skaugen takes some time to describe more fully the specifications around 2-in-1 devices, coming from hardware partners like Dell, HP, Lenovo, Panasonic, Sony, and Toshiba. This leads into a demonstration of a variety of 2-in-1 devices: sliders, fold-over designs, detachables (where the keyboard detaches), and “ferris wheel” designs where the screen flips. Now taking the stage is Tami Reller from Microsoft, whose software powered all the 2-in-1 demonstrations that Intel just showed. The keynote sort of digresses into a “Microsoft Q&A” for a few minutes before getting back on track with some Intel announcements.

From a more business-focused perspective, Intel announces 4th generation vPro-enabled Intel Core processors. Location awareness is also being integrated into vPro to enable location-based services (one example provided is automatically restricting access to confidential documentation when the user leaves the building). Intel also announced (this week) the Intel SSD Pro 1500. Additionally, Intel is announcing Intel Pro WiDi (Wireless Display) to better integrate wireless projectors. Finally, Intel is working with Cisco to eliminate passwords entirely. They are doing that via Intel Identity Password, which embeds keys into the hardware to enable passwordless VPNs.

Taking the stage now is Mario Müller, VP of IT Infrastructure at BMW. Müller talks about how Intel Atom CPUs are built into BMW cars, especially the new BMW i8 (BMW’s first all-electric car, if I heard correctly). He also refers to some new deployments within BMW that will leverage the new vPro-enabled Intel Core CPUs, many of which will be Ultrabooks. Müller indicates that 2-in-1 is useful, not for all employees, but certainly for select individuals who need that functionality.

Skaugen now announces Bay Trail M and Bay Trail D reference platforms. While Bay Trail (also referred to as Bay Trail T) is intended for tablets, the M and D platforms will help drive innovation in mobile and desktop form factors. After a quick look at some hardware prototypes, Skaugen takes a moment to look ahead at what Intel will be doing over the next year or so. He shows 30% reductions in power usage coming from Broadwell, which will be the 14nm technology Intel will introduce next year. From there, Skaugen shifts into a discussion of perceptual computing (3D support). He shows off a 3-D camera that can be embedded into the bezel of an Ultrabook, then shows a video of kids interacting with a prototype hardware and software combination leveraging Intel’s 3-D/perceptual computing support.

And now Doug Fisher returns to the stage. He starts his portion of the keynote by returning to the Intel-Microsoft partnership and focusing on innovations like fast start, longer battery life, touch- and sensor-awareness, on a highly responsive platform that also offers full compatibility around applications and devices. Part of Fisher’s presentation includes tools for developers to help make their applications aware of the 2-in-1 form factor, so that applications can automatically adjust their behavior and UI based on the form factor of the device on which they’re running.

Intel is also working closely with Google to enhance Android on Intel. This includes work on the Dalvik runtime, optimized drivers and firmware, key kernel contributions, and the NDK app bridging technology that will allow apps developed for other platforms (iOS?) to run on Android. Fisher next introduces Gonzague de Vallois of GameLoft, a game developer. Vallois talks about how they have been developing natively on Intel architecture and shows an example of a game they’ve written running on a Bay Trail T-based platform. The tools, techniques, and contributions that Intel have with Android are also being applied to Chrome OS. Fisher brings Sundar Pichai from Google on to the stage. Pichai is responsible for both Android and Chrome OS, and he talks about the momentum he’s seeing on both platforms.

Fisher says that Intel believes HTML5 to be an ideal mechanism for crossing platform boundaries, and so Intel is announcing a new version of their XDK for HTML5 development. This leads into a demonstration of using the Intel XDK (which stands for “cross-platform development kit”) to build a custom application that runs across multiple platforms. With that, he concludes the general session for day 2.

As you may already know, my new role at EMC has me spending more time on open source projects like KVM, OpenStack, and others. Over the past few days, I’ve been working with KVM for a bit, and I wanted to share a few things I’ve learned along the way. Readers who are very familiar with KVM won’t likely find anything new here (although I do encourage you to share any additional information or insight in the comments).

The information presented here is based on working with KVM using libvirt. For those unfamiliar with libvirt, it is an open source framework/API that can help manage multiple hypervisors and virtual machines. If you use a different management layer—such as oVirt—then things might look a bit different, so keep that in mind. (My use of libvirt vs. oVirt is not an endorsement one way or another; I just happened to start with libvirt.) Also, I’m only focusing on command-line tools here; there are a number of graphical user interfaces (GUIs) available as well, which I may investigate further at some point in the future.

The Components of a KVM Guest

Let’s start with looking at the different components of a KVM guest. If you are familiar with VMware virtualization, you know that a VMware-based VM has essentially two components:

  • The VM definition, stored in a .VMX file
  • The VM’s storage, typically stored in one or more .VMDK files

From what I’ve been able to determine so far, KVM guests also have essentially two components:

  • The VM definition, defined in XML format
  • The VM’s storage, stored either as a volume managed by an LVM or as a file stored in a file system

You can look at the XML configuration of a KVM guest (more properly referred to as a domain) in a couple of different ways. Both involve the use of virsh, the command-line tool that is part of libvirt:

  • To edit the configuration of a guest, use virsh edit <Name of guest VM> and the system will open the XML configuration in your default editor.
  • To export the configuration of a guest, use virsh dumpxml <Name of guest VM>. This dumps the XML configuration to STDOUT; you can redirect the output to a file if you like.
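
For example, to capture the configuration of a (hypothetical) guest named vm01 to a file:

virsh dumpxml vm01 > vm01.xml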

Here’s a snippet of the XML configuration for a KVM guest:

...
    <devices>
    <emulator>/usr/bin/kvm</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw'/>
      <source file='/var/lib/libvirt/images/vm01.img'/>
      <target dev='hda' bus='ide'/>
      <address type='drive' controller='0' bus='0' unit='0'/>
    </disk>
...

The second component of a KVM guest is the storage; as I mentioned earlier, this can be a file on a file system or it can be a volume managed by a logical volume manager (LVM). The XML snippet above shows the configuration for a disk image stored as a file on a file system. This particular disk image is in QEMU raw format.
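
If you’re ever unsure what format a given disk image uses, qemu-img can tell you. A couple of illustrative commands (the path matches the snippet above; qcow2 is just one example of an alternate format):

# Report the format, virtual size, and actual disk usage of an image:
qemu-img info /var/lib/libvirt/images/vm01.img

# Convert a raw image to the more space-efficient qcow2 format:
qemu-img convert -f raw -O qcow2 /var/lib/libvirt/images/vm01.img /var/lib/libvirt/images/vm01.qcow2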

With that (very brief) overview complete, here is how you perform some common tasks with KVM guests.

Creating a KVM Guest

There are a couple of different ways to create a KVM guest:

  • Manually create the XML definition of the guest, then use virsh define <Name of XML file> to import the definition. You could, naturally, create a new XML definition based on an existing definition and just change a few parameters.
  • Use a libvirt-compatible tool, like virt-install, to create the guest definition.

Here’s a quick example of creating a KVM guest using virt-install (I’ve inserted backslashes and line breaks for readability):

virt-install --name vmname --ram 1024 --vcpus=1 \
--disk path=/var/lib/libvirt/images/vmname.img,size=20 \
--network bridge=br0 \
--cdrom /var/lib/libvirt/images/os-install.iso \
--graphics vnc --noautoconsole --hvm \
--os-variant win2k3

This command creates a KVM guest named “vmname”, with 1024 MB of RAM, a single vCPU, a 20 GB virtual disk (in the default QEMU raw format, thin provisioned so that not all 20 GB are allocated up front), connected to an OS installation ISO at the path specified and using hardware virtualization. The network connection is to a bridge called br0, which might be a fake bridge on Open vSwitch (which in my case it is). Access to the console is handled via VNC.

Full details on the various parameters for virt-install are available via the man page (or you can look here).

Starting and Stopping KVM Guests

This one is easy:

  • To start a VM, just run virsh start <Name of guest VM> and you’re off to the races.
  • To stop a VM, run virsh shutdown <Name of guest VM> and the guest will start an orderly shutdown. (BTW, I know that the orderly shutdown works for Ubuntu and Windows guests, but I haven’t figured out the exact mechanism yet. Any insight there?)
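
To check the state of your guests along the way, virsh can list them; the --all flag includes guests that are shut off:

virsh list --all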

Changing Guest Configuration

If the VM is shut down, you can use the virsh edit <Name of guest VM> command to edit the XML definition directly.

If the VM is running, then only certain things can be done. You are supposed to be able to swap floppy or CD/DVD images using the virsh qemu-monitor-command command. In practice I found that it ejected CD/DVD images fine, but wouldn’t load a new CD/DVD ISO image. I’m still working on a fix for that one (tips are welcome).
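
For reference, here’s the sort of invocation I mean; the --hmp flag passes a human monitor command straight through to QEMU, and the device name (ide1-cd0) is just a placeholder, so use the output of info block to find the right one for your guest:

# List the guest's block devices to find the CD/DVD device name:
virsh qemu-monitor-command vm01 --hmp "info block"

# Eject the current ISO (this part worked for me):
virsh qemu-monitor-command vm01 --hmp "eject ide1-cd0"

# Load a new ISO (this is the part that hasn't worked reliably for me):
virsh qemu-monitor-command vm01 --hmp "change ide1-cd0 /var/lib/libvirt/images/new.iso"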

Using Paravirtualized Drivers

For improved performance, you can also use paravirtualized drivers. (This is the equivalent of using VMXNET3 or PVSCSI in a VMware world.) So far I’ve only tested the network drivers on Ubuntu and Windows, but they seem to work just fine. (These paravirtualized drivers are referred to as the virtio drivers.)

To use virtio drivers for networking, edit the guest configuration like this:

<interface type='bridge'>
  <mac address='52:54:00:a0:55:ef'/>
  <source bridge='br0'/>
  <model type='virtio'/>  <!-- add this line -->
</interface>

I found this page to be quite helpful in determining exactly how to enable the virtio network drivers. This part is the same for both Ubuntu and Windows guests; for Windows guests, you also have to install the virtio drivers into Windows. That can be a bit more problematic; I’d recommend copying them across the network before making the change above.

This change must be made while the guest is shut down, by the way.

Cold Migrations of KVM Guests

You can probably figure out how this one works by now:

  1. Shut down the guest using virsh shutdown <Name of guest VM>.
  2. Export the XML configuration using virsh dumpxml <Name of guest VM> > <Name of XML file>.
  3. Copy the XML definition you just created and the guest’s disk image (specified in the XML configuration) to the target node.
  4. Edit the XML configuration (as needed) to reflect any changes in paths.
  5. Define the guest on the new node using virsh define <Name of XML file>.

Obviously, this assumes an image-based disk, not an LVM-backed disk.
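
Strung together, the whole sequence might look something like this; vm01 and newhost are placeholders, and this assumes the disk image lives at the path shown:

virsh shutdown vm01
virsh dumpxml vm01 > vm01.xml
scp vm01.xml newhost:/tmp/
scp /var/lib/libvirt/images/vm01.img newhost:/var/lib/libvirt/images/

# On the target node, edit /tmp/vm01.xml if any paths changed, then:
virsh define /tmp/vm01.xml
virsh start vm01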

While hardly a comprehensive list of all the various operations that might need to be done with a KVM guest, hopefully this will at least get you started. I’ll post more as I learn more, and readers are encouraged to share tips, tricks, or other information in the comments below.

Welcome to Technology Short Take #23, another collection of links and thoughts related to data center technologies like networking, storage, security, cloud computing, and virtualization. As usual, we have a fairly wide-ranging collection of items this time around. Enjoy!

Networking

  • A couple of days ago I learned that there are a couple open source implementations of LISP (Locator/ID Separation Protocol). There’s OpenLISP, which runs on FreeBSD, and there’s also a project called LISPmob that brings LISP to Linux. From what I can tell, LISPmob appears to be a bit more focused on the endpoint than OpenLISP.
  • In an earlier post on STT, I mentioned that STT’s re-use of the TCP header structure could cause problems with intermediate devices. It looks like someone has figured out how to allow STT through a Cisco ASA firewall; the configuration is here.
  • Jose Barreto posted a nice breakdown of SMB Multichannel, a bandwidth-enhancing feature of SMB 3.0 that will be included in Windows Server 2012. It is, naturally, only supported between two SMB 3.0-capable endpoints (which, at this time, means two Windows Server 2012 hosts). Hopefully additional vendors will adopt SMB 3.0 as a network storage protocol. Just don’t call it CIFS!
  • Reading this article, you might deduce that Ivan really likes overlay/tunneling protocols. I am, of course, far from a networking expert, but I do have to ask: at what point does it become necessary (if ever) to move some of the intelligence “deeper” into the stack? Networking experts everywhere advocate the “complex edge-simple core” design, but does it ever make sense to move certain parts of the edge’s complexity into the core? Do we hamper innovation by insisting that the core always remain simple? As I said, I’m not an expert, so perhaps these are stupid questions.
  • Massimo Re Ferre posted a good article on a typical VXLAN use case. Read this if you’re looking for a more concrete example of how VXLAN could be used in a typical enterprise data center.
  • Bruce Davie of Nicira helps explain the difference between VPNs and network virtualization; this is a nice companion article to his colleague’s post (which Bruce helped to author) on the difference between network virtualization and software-defined networking (SDN).
  • The folks at Nicira also collaborated on this post regarding software overhead of tunneling. The results clearly favor STT (which was designed to take advantage of NIC offloading) over GRE, but the authors do admit that as “GRE awareness” is added to the cards that protocol’s performance will improve.
  • Oh, and while we’re on the topic of SDN…you might have noticed that VMware has taken to using the term “software-defined” to describe many of the services that vSphere (and related products) provide. This includes the use of software-defined networking (SDN) to describe the functionality of vSwitches, distributed vSwitches, vShield, and other features. Personally, I think that the term software-based networking (SBN) is far more applicable than SDN to what VMware does. Is it just me?
  • Brad Hedlund wrote this post a few months ago, but I’m just now getting around to commenting about it. The gist of the article—forgive me if I munge it too much, Brad—is that the use of open source software components might dramatically change the shape/way/means in which networking protocols and standards are created and utilized. If two components are communicating over the network via open source components, is some sort of networking standard needed to avoid being “proprietary”? It’s an interesting thought, and goes to show the power of open source on the IT industry. Great post, Brad.
  • One more mention of OpenFlow/SDN: it’s great technology (and I’m excited about the possibilities that it creates), but it’s not a silver bullet for scalability.

Security

  • I came across this interesting post on a security attack based on VMDKs. It’s quite an interesting read, even if the probability of being able to actually leverage this attack vector is fairly low (as I understand it).

Storage

  • Chris Wahl has a good series on NFS with VMware vSphere. You can catch the start of the series here. One comment on the testing he performs in the “Same Subnet” article: if I’m not mistaken, I believe the VMkernel selection is based upon which VMkernel interface is listed in the first routing table entry for the subnet. This is something about which I wrote back in 2008, but I’m glad to see Chris bringing it to light again. (If you want to check this behavior yourself, see the quick example after this list.)
  • George Crump published this article on using DCB to enhance iSCSI. (Note: The article is quite favorable to Dell, and George discloses an affiliation with Dell at the end of the article.) One thing I did want to point out is that—if I recall correctly—the 802.1Qbb standard for Priority Flow Control only defines a single “no drop” class of service (CoS). Normally that CoS is assigned to FCoE traffic, but in an environment without FCoE you could assign it to iSCSI. In an environment with both, that could be a potential problem, as I see it. Feel free to correct me in the comments if my understanding is incorrect.
  • Microsoft is introducing data deduplication in Windows Server 2012, and here is a good post providing an introduction to Microsoft’s deduplication implementation.
  • SANRAD VXL looks interesting—anyone have any experience with it? Or more detailed technical information?
  • I really enjoyed Scott Drummonds’ recent storage performance analysis post. He goes pretty deep into some storage concepts and provides real-world, relevant information and recommendations. Good stuff.
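
Regarding the VMkernel selection behavior mentioned in the first bullet above, you can inspect the relevant state from the ESXi shell. These commands exist on ESX/ESXi of this vintage, though the exact output format varies by version:

esxcfg-route -l    # list the VMkernel routing table
esxcfg-vmknic -l   # list the VMkernel interfaces and their IP configuration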

Cloud Computing/Cloud Management

  • After moving CloudStack to the Apache Software Foundation, Citrix published this discourse on “open washing” and provides a set of questions to determine the “openness” of software projects with which you may become involved. While the article is clearly structured to favor Citrix and CloudStack, the underlying point—to understand exactly what “open source” means to your vendors—is valid and worth consideration.
  • Per the AWS blog, you can now export EC2 instances out of Amazon and into another environment, including VMware, Hyper-V, and Xen environments. I guess this kind of puts a dent in the whole “Hotel California” marketing play that some vendors have been using to describe Amazon.
  • Unless you’ve been hiding under a rock for the past few weeks, you’ve most likely heard about Nick Weaver’s Razor project. (If you haven’t heard about it, here’s Nick’s blog post on it.) To help with the adoption/use of Razor, Nick also recently announced an overview of the Razor API.

Virtualization

  • Frank Denneman continues to do a great job writing solid technical articles. The latest article to catch my eye (and I’m sure that I missed some) was this post on combining affinity rule types.
  • This is an interesting post on a vSphere 5 networking bug affecting iSCSI that was fixed in vSphere 5.0 Update 1.
  • Make a note of this VMware KB article regarding UDP traffic on Linux guests using VMXNET3; the workaround today is using E1000 instead.
  • This post is actually over a year old, but I just came across it: Luc Dekens posted a PowerCLI script that allows a user to find the maximum IOPS values over the last 5 minutes for a number of VMs. That’s handy. (BTW, I have fixed the error that kept me from seeing the post when it was first published—I’ve now subscribed to Luc’s blog.)
  • Want to use a Debian server to provide NFS for your VMware environment? Here is some information that might prove helpful.
  • Jeremy Waldrop of Varrow provides some information on creating a custom installation ISO for ESXi 5, Nexus 1000V, and PowerPath/VE. Cool!
  • Cormac Hogan continues to pump out some very useful storage-focused articles on the official VMware vSphere blog. For example, both the VMFS locking article and the article on extending an EagerZeroedThick disk were great posts. I sincerely hope that Cormac keeps up the great work.
  • Thanks to this Project Kronos page, I’ve been able to successfully set up XCP on Ubuntu Server 12.04 LTS. Here’s hoping it gets easier in future releases.
  • Chris Colotti takes on some vCloud Director “challenges”, mostly surrounding vShield Edge and vCloud Director’s reliance on vShield Edge for specific networking configurations. While I do agree with many of Chris’ points, I personally would disagree that using vSphere HA to protect vShield Edge is an acceptable configuration. I was also unable to find any articles that describe how to use vSphere FT to protect the deployed vShield appliances. Can anyone point out one or more of those articles? (Put them in the comments.)
  • Want to use Puppet to automate the deployment of vCenter Server? See here.

I guess it’s time to wrap up now, lest my “short take” get even longer than it already is! Thanks for reading this far, and I hope that I’ve shared something useful with you. Feel free to speak up in the comments if you have questions, thoughts, or clarifications.

Welcome to Technology Short Take #22! Once again, I find myself without too many articles to share with you this time around. I guess that will make things a bit easier for you, the reader, but it does make me question whether or not I’m “listening” to the right communities. If any readers have suggestions on sources of information to which I should be subscribing or I should be following, I’d love to hear your suggestions.

In any case, let’s get into the meat of it. I hope you find something useful!

Networking

Security

  • I have to agree with Tom Hollingsworth that we often create backdoors by design simply out of our own laziness. I’ve heard it said—in fact I may have used the statement myself—that no amount of security can fix stupidity. That might be a bit strong, but it does apply to the “shortcuts” that we create for ourselves or our customers in our designs.

Servers/Hardware

  • Kevin Houston (who works for Dell) posted an article about a recent test report comparing power usage between Dell blades and Cisco UCS blades. If you’re comparing these two solutions, find a comparable report from Cisco and then draw your own conclusions. (Always get multiple views on a topic like this, because every vendor—and I know because I work for a vendor, too—will spin the report in their favor.)

Virtualization

That’s it for this time around. I hope that you have found something useful here. If anyone has any suggestions for sites/forums they’ve found helpful with data center-focused topics, I’d love for you to add that information in the comments.

I just finished reading a post on ZDNet titled “Are Hyper-V and App-V the new Windows Servers?” in which the author—Ken Hess—postulates that the rise of virtualization will shape the future of the Microsoft Windows OS such that, in his words:

The Server OS itself is an application. It’s little more than (or hopefully a little less than) Server Core.

The author also advises his readers that they “have to learn a new vocabulary” and that they’ll “deploy services and applications as workloads.”

Does any of this sound familiar to you?

It should. Almost 6 years ago, I was carrying on a blog conversation (with a web site that is now defunct) about the future of the OS. I speculated at that point that the general-purpose OS as we then knew it would be gone within 5 to 10 years. It looks like that prediction might be reasonably accurate. (Sadly, I was horribly wrong about Mac OS X, but everyone’s allowed to be wrong now and then, aren’t they?)

It should further sound familiar because almost 5 years ago, Srinivas Krishnamurti of VMware wrote an article describing a new (at the time) concept. This new concept was the idea of a carefully trimmed operating system (OS) instance that served as an application container:

By ripping out the operating system interfaces, functions, and libraries and automatically turning off the unnecessary services that your application does not require, and by tailoring it to the needs of the application, you are now down to a lithe, high performing, secure operating system – Just Enough of the Operating System, that is, or JeOS.

The idea of the server OS as an application container—what Ken suggests in very Microsoft-centric terms in his article—is not a new idea, but it is good to see those outside of the VMware space opening their eyes to the possibilities that a full-blown general purpose OS might not be the best answer anymore. Whether it is Microsoft’s technology or VMware’s technology that drives this innovation is a topic for another post, but it is pretty clear to me that this innovation is already occurring and will continue to occur.

The OS is dead, long live the OS!

An aside: if this is the case—and I believe that it is—what does this portend for massive OS upgrades such as Windows 8 (and Server 2012)?

Welcome to Technology Short Take #17, another of my irregularly-scheduled collections of various data center technology-related links, thoughts, and comments. Here’s hoping you find something useful!

Networking

  • I think it was J Metz of Cisco that posted this to Twitter, but this is a good reference to the various 10 Gigabit Ethernet modules.
  • I’ve spoken quite a bit about stretched clusters and their potential benefits. For an opposing view—especially regarding the use of stretched clusters as a disaster avoidance solution—check out this article. It’s a nice counterpoint, especially from the perspective of the network.
  • Anyone know anything about sFlow?
  • Here’s a good post on VXLAN that has some useful information. I’d just like to point out that VXLAN is really only intended to address Layer 2 communications “within” a vApp or a collection of VMs (perhaps a single organization’s VMs), and doesn’t do anything to address Layer 3 routing/accessibility for clients (or “consumers”) attempting to connect to those systems. For that, you’ll still need—at least today—technologies like OTV, LISP, and others.
  • A quick thought that I’m still exploring: what’s the impact of OpenFlow on technologies like VXLAN, NVGRE, and others? Does SDN eliminate the need for these technologies? I’d be curious to hear your thoughts.

Servers/Operating Systems

  • If you’ve adopted Mac OS X Lion 10.7, you might have noticed some problems connecting to older servers/NAS devices running AFP (AppleTalk Filing Protocol). This Apple KB article describes a fix. Although I’m running Snow Leopard now, I was running Lion on a new MacBook Pro and I can attest that this fix does work.
  • This Microsoft KB article describes how to extend the Windows Server 2008 evaluation period. I’ve found this useful for Windows Server 2008 instances in the lab that I need for longer than 60 days but that I don’t necessarily want to activate (because they are transient).
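
If memory serves, the rearm process boils down to a single command run from an elevated prompt, followed by a reboot; check the KB article itself for the exact limits on how many times you can rearm:

slmgr.vbs /rearm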

Storage

  • Jason Boche blogged about a way to remove stubborn hosts from Unisphere. I’ve personally never seen this problem, but it’s nice to know how to address it should it occur.
  • Who would’ve thought that an HDD could serve as a cache for an SSD? Shouldn’t it be the other way around? Normally, that would probably be the case, but as described here there are certain instances and ways in which using an HDD as a cache for an SSD can improve performance.
  • Scott Drummonds wraps up his 3 part series on flash storage in part 3, which contains information on sizing flash storage. If you haven’t been reading this series, I’d recommend giving it a look.
  • Scott also weighs in on the flash as SSD vs. flash on PCIe discussion. I’d have to agree that interfaces are important, and the ability of the industry to successfully leverage flash on the PCIe bus is (today) fairly limited.
  • Henri updated his VNXe blog series with a new post on EFD and RR performance. No real surprises here, although I do have one question for Henri: is that your car in the blog header?

Virtualization

  • Interested in setting up host-only networking on VMware Fusion 4? Here’s a quick guide.
  • Kenneth Bell offers up some quick guidelines on when to deploy MCS versus PVS in a XenDesktop environment. MCS vs. PVS is a topic of some discussion on the vSpecialist mailing list as they have very different IOPS requirements and I/O profiles.
  • Speaking of VDI, Andre Leibovici has two articles that I wanted to point out. First, Andre does a deep dive on Video RAM in VMware View 5 with 3D; this has tons of good information that is useful for a VDI architect. (The note about the extra .VSWP overhead, for example, is priceless.) Andre also has a good piece on VDI and Microsoft Outlook that’s worth reading, laying out the various options for Outlook-related storage. If you want to be good at VDI, Andre is definitely a great resource to follow.
  • Running Linux in your VMware vSphere environment? If you haven’t already, check out Bob Plankers’ Linux Virtual Machine Tuning Guide for some useful tips on tuning Linux in a VM.
  • Seen this page?
  • You’ve probably already heard about Nick Weaver’s new “Uber” tool, a new VM alignment tool called UBERAlign. This tool is designed to address VM alignment, a problem with how guest file systems are formatted within a VMDK. For more information, see Nick’s announcement here.
  • Don’t disable DRS when you’re using vCloud Director. It’s as simple as that. (If you want to know why, read Chris Colotti’s post.)
  • Here’s a couple of great diagrams by Hany Michael on vCloud Director management pods (both public cloud and private cloud management).
  • People automatically assume that “virtualization” means consolidating multiple workloads onto a single physical server. However, virtualization is really just a layer of abstraction, and that layer of abstraction can be used in a variety of ways. I spoke about this in early 2010. This article (written back in March of 2011) by Brad Hedlund picks up on that theme to show another way that virtualization—or, as he calls it, “inverse virtualization”—can be applied to today’s data centers and today’s applications.
  • My discussion on the end of the infrastructure engineer generated some conversations, which is good. One of the responses was by Aaron Sweemer in which he discusses the new (but not new) “data layer” and expresses a need for infrastructure engineers to be aware of this data layer. I’d agree with a general need for all infrastructure engineers to be aware of the layers above them in the stack; I’m just not convinced that we all need to become application developers.
  • Here’s a great post by William Lam on the missing piece to creating your own vSEL cloud. I’ll tell you, William blogs some of the coolest stuff…I wish I could dig in as deep as he does.
  • Here’s a nice look at the use of PowerCLI to help with the automation of DRS rules.
  • One of my projects for the upcoming year is becoming more knowledgeable and conversant with the open source Xen hypervisor and Citrix XenServer. I think that the XenServer Design Handbook is going to be a useful resource for that project.
  • Interested in more information on deploying Oracle databases on vSphere? Michael Webster, aka @vcdxnz001 on Twitter, has a lengthy article with lots of information regarding Oracle on vSphere.
  • This VMware KB article describes how to enable centralized logging for vCloud Director cells. This is particularly important for HA environments, where VMware’s recommended HA strategy involves the use of multiple vCD cells.

I guess I should wrap it up here, before this post gets any longer. Thanks for reading this far, and feel free to speak up in the comments!

A colleague on the EMC vSpecialist team (many of you probably know Chris Horn) sent me this information on an issue he’d encountered. Chris wanted me to share the information here in the event that it would help others avoid the time he’s spent troubleshooting this issue.

What Chris has found is that there is a flaw in Windows Server 2008 and Windows 7 that causes “orphaned NICs” when using the VMXNET3 network driver. There appear to be three cases in which this problem appears:

  1. When you deploy an OVF or OVA of Windows Server 2008 or Windows 7
  2. When you clone a VM running Windows Server 2008 or Windows 7 (this also applies to deploying from a template)
  3. When deploying a vApp within vCD that contains Windows Server 2008 or Windows 7 (this can cause quite a bit of chaos)

Up until now, there were two available workarounds that appeared to resolve this issue:

  1. Use the Intel E1000 driver, which doesn’t cause the same problem. However, it’s unclear how well the E1000 performs with 10 Gigabit Ethernet uplinks.
  2. Use a statically assigned MAC address, which of course doesn’t scale very well in large (and/or dynamic) environments. This is also very difficult to do in vCloud Director (apparently even rising to the level of having to hack the vCD database).
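
For the record, assigning a static MAC address in a VM’s .vmx file looks roughly like this. The address shown is just a placeholder; manually assigned addresses are expected to fall in the 00:50:56:00:00:00 through 00:50:56:3F:FF:FF range:

ethernet0.addressType = "static"
ethernet0.address = "00:50:56:00:12:34"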

It would appear that the behavior Chris describes is captured in this VMware KB article. There is also a hotfix available in this Microsoft KB article; Chris has tested this hotfix and indicates that it does indeed fix the problem. The referenced Microsoft KB article and the referenced VMware KB article also reference this third Microsoft KB article, further leading me to believe that the articles are indeed related to the same underlying issue.

If you are deploying Windows Server 2008 and/or Windows 7-based VMs in your environment, you might want to take a look at the linked VMware KB and Microsoft KB articles to be sure that you don’t run into the same sorts of problems Chris was experiencing.

Thanks Chris!

I had a reader contact me and ask if he could ask the rest of the readers a vSphere design question. I thought that it might start an engaging and interesting discussion around vSphere design, so here’s the reader’s scenario and question(s):

I am looking to design an ESXi environment to potentially deploy Microsoft SQL servers that require extreme high availability at a scale of 50+ MSCS/WFC clusters. We’d like to do this in an ESXi 4.1 environment using Windows Server 2008 R2, MSCS/WFC, and SQL Server 2008 with Fibre Channel storage. I’ve done this in the past on a smaller scale (3-4 total clusters) and know most of the caveats such as proper heartbeat requirements, no HA/DRS support, physical RDM compatibility mode requirements for shared disks, eagerzeroedthick OS disks, no round-robin multipathing, etc.

The issues I’ve run into in the past revolved around managing these virtual servers differently than other guests since they couldn’t readily be moved between hosts. We also found that the reboot time on these hosts with MSCS/WFC using RDMs was extremely slow (in excess of 45 minutes to fully reboot, we could speed this up by pulling the fibre cables).

Some of the design considerations I’m curious about would include:

  • Where do people put the VMFS/RDM file links?
  • Do people put the guests in different clusters? Is this even possible?
  • How do people separate active/passive nodes? Do people use host based affinity rules to accomplish this?
  • Do reboot times on hosts with lots of RDMs get linearly slower as more MSCS/WFC RDMs are presented to a host?
  • Do people really push back and try to get database mirroring instead of clustering? If so, what caveats around this have people encountered?

I’m just curious how others are handling situations like this or if anyone is really doing it at scale.

Thoughts? What do you guys think about this reader’s situation? I’d love for this to jump start a conversation here with recommendations, experiences, additional questions, etc. vSphere design is a topic that lots of readers are tackling, either for certification or just because that’s their job, and the discussion around this scenario could end up exposing some useful resources and information.

So jump in with your thoughts in the comments below! I only ask that you provide full disclosure with regards to vendor affiliations, where applicable. Thanks, and I look forward to seeing some of the responses.
