Virtualization


This is part 16 of the Learning NSX series, in which I will show you how to configure VMware NSX to route to multiple external VLANs. This configuration will allow you to have logical routers that could be uplinked to any of the external VLANs, providing additional flexibility for consumers of NSX logical networks.

Naturally, this post builds on all the previous entries in this series, so I encourage you to visit the Learning NVP/NSX page for links to previous posts. Because I’ll specifically be discussing NSX gateways and routing, there are some posts that are more applicable than others; specifically, I strongly recommend reviewing part 6, part 9, part 14, and part 15. Additionally, I’ll assume you’re using VMware NSX with OpenStack, so reviewing part 11 and part 12 might also be helpful.

Ready? Let’s start with a very quick review.

Review of NSX Gateway Connectivity

You may recall from part 6 that the NSX gateway appliance is the piece of VMware NSX that handles traffic into or out of logical networks. As such, the NSX gateway appliance is something of a “three-legged” appliance:

  • One “leg” (network interface) provides management connectivity between the gateway appliance and the nodes in the NSX controller cluster
  • One “leg” provides connectivity to the transport network, which carries the encapsulated logical network traffic
  • One “leg” is the uplink and provides connectivity to physical networks

That’s the physical architecture. From a more logical perspective, you may recall from part 15 that NSX gateway appliances are combined into an NSX gateway service, and the NSX gateway service hosts one or more logical routers. Neither the NSX gateway appliance nor the NSX gateway service is visible to the consumers of the environment; they are visible only to the operators and/or administrators. Consumers see only logical routers, which also serve as the default gateway (default route/IP gateway) to and from their logical networks.

The configurations I’ve shown and discussed so far have assumed the presence of only a single uplink. NSX is not constrained to a single uplink, nor is it constrained to a single physical network on an uplink. If you need multiple networks on the outside of an NSX gateway appliance, you can either use multiple uplinks, or you can use multiple VLANs on a single uplink. In this post I’ll show you how to use multiple VLANs on the outside. The diagram below provides a graphical representation of what the configuration will look like.

Multiple VLANs with NSX Gateways


Setting up this configuration will involve three steps:

  1. Configuring the uplink to carry multiple VLANs.
  2. Verifying the gateway configuration.
  3. Setting up the external networks in OpenStack.

Let’s take a look at each of these steps.

Configuring the Uplink

The process for this step will vary, mostly because it involves configuring your physical network to pass the appropriate VLANs to the NSX gateway appliance. I’ve written a few articles in the past that might be helpful here:

Although the titles of some of these articles seem to imply they are VMware-specific, they aren’t—the physical switch configuration is absolutely applicable here.

Verifying the Gateway Configuration

No special configuration is required on the NSX gateway appliance. As you probably already know, the NSX gateway appliance leverages Open vSwitch (OVS). OVS ports are trunk ports by default, and therefore will carry the VLAN tags passed by a properly configured physical switch. Further, the OVS bridge for the external uplink (typically breth1 or breth2) doesn’t need an IP address assigned to it, because the IP addresses used for logical routing are assigned to the logical routers, not to the NSX gateway appliance’s uplink interface. If you do have an IP address assigned to the external uplink interface, you can safely remove it; if you prefer to leave it in place, that’s fine too.

As a side note, the NSX gateway appliances do support configuring VLAN sub-interfaces using a command like this:

add network interface <physical interface> vlan <VLAN ID>
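
For example, to create a sub-interface for VLAN 100 on the eth2 physical interface (both values here are purely hypothetical), the command would look something like this:

add network interface eth2 vlan 100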

Thus far, I haven’t found a need to use VLAN sub-interfaces when using multiple VLANs on the outside of an NSX gateway appliance, but I did want to point out that this functionality does indeed exist.

Setting up the External Networks

This is the only moderately tricky part of the configuration. In this step, you’ll prepare multiple external networks that can be used as uplinks for logical routers.

The command you’ll want to use (yes, you have to use the CLI—this functionality isn’t exposed in the OpenStack Dashboard web interface) looks like this:

neutron net-create <network name> -- --router:external=True \
  --provider:network_type l3_ext --provider:segmentation_id <VLAN ID> \
  --provider:physical_network=<NSX gateway service UUID> --shared=True

For the most part, this command is pretty straightforward, but let’s break it down nevertheless:

  • The router:external=True tells Neutron this network can be used as the external (uplink) connection on a logical router.
  • The provider:network_type l3_ext is an NSX-specific extension that enables Neutron to work with the layer 3 (routing) functionality of the NSX gateway appliances.
  • The provider:segmentation_id portion provides the VLAN ID that should be associated with this particular external network. This VLAN ID should be one of the VLAN IDs that is trunked across the connection from the physical switch to the NSX gateway appliance.
  • The provider:physical_network portion tells OpenStack which specific NSX gateway service to use. This is important to note: this command references an NSX gateway service, not an NSX gateway appliance. Refer to part 15 if you’re unclear on the difference.

You’d repeat this command for each external network (VLAN) you want connected to NSX and usable inside OpenStack.
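
As a purely illustrative example, creating an external network for VLAN 100 on a gateway service whose UUID is aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee might look like the command below (the network name, VLAN ID, and UUID are all placeholder values you’d replace with your own):

neutron net-create vlan100-ext-net -- --router:external=True \
  --provider:network_type l3_ext --provider:segmentation_id 100 \
  --provider:physical_network=aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee --shared=True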

For each Neutron network, you’ll also need a Neutron subnet. The command to create a subnet on one of these external networks looks like this:

neutron subnet-create <network name> <CIDR> \
  --name <subnet name> --enable_dhcp=False \
  --allocation-pool start=<starting IP address>,end=<ending IP address>

The range of IP addresses specified in the --allocation-pool portion of the command becomes the range of addresses from this particular subnet that can be assigned as floating IPs. It is also the pool of addresses from which logical routers will pull an address when they are connected to this particular external network.
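
Continuing the hypothetical example from above, a matching subnet for the VLAN 100 external network might be created like this (again, all names and addresses are placeholders):

neutron subnet-create vlan100-ext-net 192.168.100.0/24 \
  --name vlan100-ext-subnet --enable_dhcp=False \
  --allocation-pool start=192.168.100.10,end=192.168.100.50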

When you’re done creating an external network and subnet for each VLAN on the outside of the NSX gateway appliance, then your users (consumers) can simply create logical routers as usual, and then select from one of the external networks as an uplink for their logical routers. This assumes you included the shared=True portion of the command when creating the network; if desired, you can omit that and instead specify a tenant ID, which would assign the external network to a specific tenant only.

I hope you find this post to be useful. If you have any questions, corrections, or clarifications, please speak up in the comments. All courteous comments are welcome!


Welcome to Technology Short Take #45. As usual, I’ve gathered a collection of links to various articles pertaining to data center-related technologies for your enjoyment. Here’s hoping you find something useful!

Networking

  • Cormac Hogan has a list of a few useful NSX troubleshooting tips.
  • If you’re not really a networking pro and need a “gentle” introduction to VXLAN, this post might be a good place to start.
  • Also along those lines—perhaps you’re a VMware administrator who wants to branch into networking with NSX, or you’re a networking guru who needs to learn more about how this NSX stuff works. vBrownBag has been running a VCP-NV series covering various objectives from the VCP-NV exam. Check them out—objective 1, objective 2, objective 3, and objective 4 have been posted so far.

Servers/Hardware

  • I’m going to go out on a limb and make a prediction: In a few years’ time (let’s say 3–5 years), Intel SGX (Software Guard Extensions) will be regarded as being as important as, if not more important than, the virtualization extensions. What is Intel SGX, you ask? See here, here, and here for a breakdown of the SGX design objectives. Let’s be real—the ability for an application to protect itself (and its data) from rogue software (including a compromised or untrusted operating system) is huge.

Security

  • CloudFlare (disclaimer: I am a CloudFlare customer) recently announced Keyless SSL, a technique for allowing organizations to take advantage of SSL offloading without relinquishing control of private keys. CloudFlare followed that announcement with a nitty gritty technical details post that describes how it works. I’d recommend reading the technical post just to get a good education on how encryption and TLS work, even if you’re not a CloudFlare customer.

Cloud Computing/Cloud Management

  • William Lam spent some time working with some “new age” container cluster management tools (specifically, govmomi, govc CLI, and Kubernetes on vSphere) and documented his experience here and here. Excellent stuff!
  • YAKA (Yet Another Kubernetes Article), this time looking at Kubernetes on CoreOS on OpenStack. (How’s that for buzzword bingo?)
  • This analytical evaluation of Kubernetes might be helpful as well.
  • Stampede.io looks interesting; I got a chance to see it live at the recent DigitalOcean-CoreOS meetup in San Francisco. Here’s the Stampede.io announcement post.

Operating Systems/Applications

  • Trying to wrap your head around the concept of “microservices”? Here’s a write-up that attempts to provide an introduction to microservices. An earlier blog post on cloud native software is pretty good, too.
  • Here’s a very nice collection of links about Docker, ranging from how to use Docker to how to use the Docker API and how to containerize your application (just to name a few topics).
  • Here’s a great pair of articles (part 1 and part 2) on microservices and Platform-as-a-Service (PaaS). This is really good stuff, especially if you are trying to expand your boundaries by learning about cloud application design patterns.
  • This article by CenturyLink Labs—which has been doing some nice stuff around Docker and containers—talks about how to containerize your legacy applications.
  • Here’s a decent write-up on comparing LXC and Docker. There are also some decent LXC-specific articles on the site as well (see the sidebar).
  • Service registration (and discovery) in a micro-service architecture can be challenging. Jeff Lindsay is attempting to help address some of the challenges with Registrator; more information is available here.
  • Unlike a lot of Docker-related blog posts, this post by RightScale on combining VMs and containers for better cloud portability is a well-written piece. The pros and cons of using containers are discussed fairly, without hype.
  • Single-process containers or multi-process containers? This site presents a convincing argument for multi-process containers; have a look.
  • Tired of hearing about containers yet? Oh, come on, you know you love them! You love them so much you want to run them on your OS X laptop. Well…read this post for all the gory details.

Storage

  • The storage aspect of Docker isn’t typically discussed in a lot of detail, other than perhaps focusing on the need for persistent storage via Docker volumes. However, this article from Red Hat does a great job (in my opinion) of exploring storage options for Docker containers and how these options affect performance and scalability. Looks like OverlayFS is the clear winner; it will be great when OverlayFS is in the upstream kernel and supported by Docker. (Oh, and if you’re interested in more details on the default device mapper backend, see here.)
  • This is a nice write-up on Riverbed SteelFusion, aka “Granite.”

Virtualization

  • Azure Site Recovery (ASR) is similar to vCloud Air’s Disaster Recovery service, though obviously tailored toward Hyper-V and Windows Server (which is perfectly fine for organizations that are using Hyper-V and Windows Server). To help with the setup of ASR, the Azure team has a write-up on the networking infrastructure setup for Microsoft Azure as a DR site.
  • PowerCLI in the vSphere Web Client, eh? Interesting. See Alan Renouf’s post for full details.
  • PernixData recently released version 2.0 of FVP; Frank Denneman has all the details here.

That’s it for this time, but be sure to visit again for future episodes. Until then, feel free to start (or join in) a discussion in the comments below. All courteous comments are welcome!


This post will provide a quick introduction to a tool called Vagrant. Unless you’ve been hiding under a rock—or, more likely, been too busy doing real work in your data center to pay attention—you’ve probably heard of Vagrant. Maybe, like me, you had some ideas about what Vagrant is (or isn’t) and what it does (or doesn’t) do. Hopefully I can clear up some of the confusion in this post.

In its simplest form, Vagrant is an automation tool with a domain-specific language (DSL) that is used to automate the creation of VMs and VM environments. The idea is that a user can create a set of instructions, using Vagrant’s DSL, that will set up one or more VMs and possibly configure those VMs. Every time the user uses the precreated set of instructions, the end result will look exactly the same. This can be beneficial for a number of use cases, including developers who want a consistent development environment or folks wanting to share a demo environment with other users.

Vagrant makes this work by using a number of different components:

  • Providers: These are the “back end” of Vagrant. Vagrant itself doesn’t provide any virtualization functionality; it relies on other products to do the heavy lifting. Providers are how Vagrant interacts with the products that will do the actual virtualization work. A provider could be VirtualBox (included by default with Vagrant), VMware Fusion, Hyper-V, vCloud Air, or AWS, just to name a few.
  • Boxes: At the heart of Vagrant are boxes. Boxes are the predefined images that are used by Vagrant to build the environment according to the instructions provided by the user. A box may be a plain OS installation, or it may be an OS installation plus one or more applications installed. Boxes may support only a single provider or may support multiple providers (for example, a box might only work with VirtualBox, or it might support VirtualBox and VMware Fusion). It’s important to note that multi-provider support by a box is really handled by multiple versions of the box (i.e., a version supporting VirtualBox, a version supporting AWS, or a version supporting VMware Fusion); a single box supports a single provider.
  • Vagrantfile: The Vagrantfile contains the instructions from the user, expressed in Vagrant’s DSL, on what the environment should look like—how many VMs, what type of VM, the provider, how they are connected, etc. Vagrantfiles are so named because the actual filename is Vagrantfile. The Vagrant DSL (and therefore Vagrantfiles) is based on Ruby.

One of the first things I thought about as I started digging into Vagrant was that Vagrant could be a tool to help streamline moving applications/code from development to production. After all, if you had providers for Vagrant that supported both VirtualBox and VMware vCenter, and you had boxes that supported both providers, then you could write a single Vagrantfile that would instantiate the same environment in development and in production. Cool, right? In theory this is possible, but in talking with some others who are much more familiar with Vagrant than I am, it seems that in practice this is not necessarily the case. Because support for multiple providers is handled by different versions of a box (as outlined above), the boxes may be slightly different and therefore may not produce exactly the same results from a single Vagrantfile. It is possible to write the Vagrantfile in such a way as to recognize different providers and react differently, but this obviously adds complexity.

With that in mind, it seems to me that the most beneficial uses of Vagrant are therefore to speed up the creation of development environments, to enable version control of development environments (via version control of the Vagrantfile), to provide some reasonable level of consistency across multiple developers, and to make it easier to share development environments. (If my conclusions are incorrect, please speak up in the comments and explain why.)

OK, enough of the high-level theory. Let’s take a look at a very simple example of a Vagrantfile:

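Here’s a minimal sketch of what such a Vagrantfile might look like, based on the description that follows (the box URL is my assumption; substitute whatever image source you prefer):

# Vagrantfile: a minimal sketch matching the description below
Vagrant.configure("2") do |config|
  # Use the Ubuntu 12.04 ("Precise") 64-bit box
  config.vm.box = "ubuntu/precise64"

  # Where to fetch the box if it isn't already present locally
  config.vm.box_url = "http://cloud-images.ubuntu.com/vagrant/precise/current/precise-server-cloudimg-amd64-vagrant-disk1.box"

  # Sync the directory containing the Vagrantfile to /vagrant in the guest
  config.vm.synced_folder ".", "/vagrant"
end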

This Vagrantfile sets the box (“ubuntu/precise64”), sets the box URL (so the box is retrieved from Canonical’s repository of cloud images), and then sets the “/vagrant” directory in the VM to be shared/synced with the current (“.”) directory on the host—in this case, the current directory is the directory where the Vagrantfile itself is stored.

To have Vagrant then use this set of instructions, run this command from the directory where the Vagrantfile is sitting:

vagrant up

You’ll see a series of things happen; along the way you’ll see a note that the machine is booted and ready, and that shared folders are getting mounted. (If you are using VirtualBox and the box I’m using, you’ll also see a warning about the VirtualBox Guest Additions version not matching the version of VirtualBox.) When it’s all finished, you’ll be deposited back at your prompt. From there, you can easily log in to the newly-created VM using nothing more than vagrant ssh. That’s pretty handy.

Other Vagrant commands include:

  • vagrant halt to shut down the VM(s)
  • vagrant suspend to suspend the VM(s); use vagrant resume to resume them
  • vagrant status to display the status of the VM(s)
  • vagrant destroy to destroy (delete) the VM(s)

Clearly, the example I showed you here is extremely simple. For an example of a more complicated Vagrantfile, check out this example from Cody Bunch, which sets up a set of VMs for running OpenStack. Cody and his co-author Kevin Jackson also use Vagrant extensively in their OpenStack Cloud Computing Cookbook, 2nd Edition, which makes it easy for readers to follow along.

I said this would be a quick introduction to Vagrant, so I’ll stop here for now. Feel free to post any questions in the comments, and I’ll do my best to answer them. Likewise, if there are any errors in the post, please let me know in the comments. All courteous comments are welcome!


This is a live blog of session DATS004, titled “Bare-Metal, Docker Containers, and Virtualization: The Growing Choices for Cloud Applications.” The speaker is Nicholas Weaver (yes, that Nick Weaver, who now works at Intel).

Weaver starts his presentation by talking about “how we got here”, discussing the various technological shifts that have affected the computing landscape over the years. Weaver includes a discussion of the drivers behind virtualization as well as the pros and cons of virtualization.

That, naturally, leads to a discussion of containers. Containers are not all that new—Solaris Zones is a form of containers that existed back in 2004. Naturally, the recent hype associated with Docker has, according to Weaver, rejuvenated interest in the concept of containers.

Before Weaver gets too far into containers, he first provides a background of some of the core containerization pieces. This includes cgroups (the ability to control resource allocation/utilization), which is built into the Linux kernel. Namespace isolation is also important, which provides full process isolation (so that one process can’t see processes in another namespace). Namespace isolation isn’t just for processes; there’s also isolation for network entities, mounts, and users. LXC is a set of user-space tools that attempted to make using these constructs easier, but it hasn’t (until recently) been easy to really leverage these constructs.

Weaver next takes this relatively abstract discussion and makes it a bit more concrete with a specific example of how a microservice architecture would look under virtualization (OS instance, microservice libraries, and the microservice itself) as well as under containers (shared OS instance and shared libraries plus the microservice itself). Weaver talks about the “instant start” attribute of a container, but puts that in the context of the lifetime of the workload that’s running in the container. Start-up times don’t really matter for long-lived workloads, but for temporary, ephemeral workloads start-up times do matter. The pattern of “container on VM” is also mentioned by Weaver as another design pattern that some people use.

Next Weaver provides a quick list of pros and cons of containers:

  • Pros: faster lifecycle vs. virtual machines; visibility into what is running within the OS; ideal for homogeneous application stacks on Linux; almost non-existent overhead
  • Cons: very complex to configure (by itself, absent some sort of orchestration system or operating at scale); currently much weaker security isolation than VMs; applications must run on Linux (because Windows doesn’t have the same container technologies)

Next, Weaver transitions the discussion to focus on Docker specifically. Weaver describes Docker as “an easy button for containers,” making the underlying containerization constructs (cgroups, namespaces, etc.) easier to use. Docker is simpler and easier than LXC (where multiple binaries were involved). Weaver believes that Docker images—which he describes as an ordered set of actions to build a container—are the real game-changer. Weaver’s discussion of Docker images leads to a review of a Dockerfile, which is a DSL (domain specific language) for creating Docker images. Docker images are built on a series of layers; underlying layers could be “just” OS images (like Ubuntu or CentOS), but they could also be customized builds that contain applications and/or data.

Image registries are how users can create images and share images with other users. The public Docker Hub is an example of an image registry.

The discussion now transitions into a quick review of the underlying Docker architecture. There is a Docker daemon that runs on Linux; the Docker client can be run elsewhere. The Docker client communicates with the Docker daemon (although you should note that in many cases the daemon listens on a local socket, which means using a Docker client remotely over the network won’t work).

The innovations that Weaver attributes to Docker include: images (like templates for VMs, and the use of copy-on-write makes them behave like code); API and CLI tools for managing container deployments; reduced complexity around deploying and managing containers; and support for namespaces and resource limits.

Weaver provides a more concrete example of how Docker can change a developer’s process for creating code. Here Weaver’s DevOps background really starts to show, as he discusses how Docker and containers would help streamline CI/CD operations.

Next up are the gotchas with containers. Trust is one gotcha; can we trust that one container won’t affect other containers? The answer, according to Weaver, is “it depends.” You still need to follow current recommended practices, such as no root access, host-level patches, auditing, and being aware of the default settings (which might be dangerous, if you aren’t aware). One way to address some of these concerns is to use VMs to provide strong security isolation between containers that need a stronger level of isolation than the standard container mechanisms can provide.

Intel, of course, is working on making containers better:

  • Security (Intel AES-NI, Intel TXT/TPM, Intel SGX)
  • Performance/flexibility (Intel VT-x/VT-d/VT-c)

Weaver wraps up the session with a quick summary of the key points from the session and some Q&A.


Welcome to Technology Short Take #44, the latest in my irregularly-published series of articles, links, ideas, and thoughts about various data center-related technologies. Enjoy!

Networking

  • One of the original problems with the VXLAN IETF specification was that it (deliberately) didn’t include any control plane information; as a result, the process of mapping MAC addresses to VTEPs (VXLAN Tunnel Endpoints) wasn’t defined, and the early implementations relied on multicast to handle this issue. To help resolve this issue, Cumulus Networks (and possibly Metacloud; I’m not sure of their involvement yet) has released an open source project called vxfld. As described in this Metacloud blog post, vxfld is designed to “handle VXLAN traffic from any operating system or hardware platform that adheres to the IETF Internet-Draft for VXLAN”.
  • Nir Yechiel recently posted part 1 of a discussion on the need for network overlays. This first post is more of a discussion of why VLANs and VLAN-based derivatives aren’t sufficient, and why we should be looking to routing (layer 3) constructs instead. I’m looking forward to part 2 of the series.
  • One ongoing discussion in the network industry these days—or so it seems—is the discussion about the interaction between network overlays and the underlying transport network. Some argue that tight integration is required; others point to streaming video services and VoIP running across the Internet and insist that no integration or interaction is needed. In this post, Scott Jensen argues in favor of the former—that SDN solutions shouldn’t just manage network overlays, but should also manage the configuration of the physical transport network as well. I’d love to hear from more networking pros (please disclose company affiliations) about their thoughts on this matter.
  • I like the distinction made here between network automation and SDN.
  • Need to get a better grasp on OpenFlow? Check out OpenFlow basics and OpenFlow deep-dive.
  • Here’s a write-up on connecting Docker containers using VXLAN. I think there’s a great deal of promise for OVS in containerized environments, but what’s needed is better/tighter integration between OVS and container solutions like Docker.

Servers/Hardware

  • Is Intel having second thoughts about software-defined infrastructure? That’s the core question in this blog post, which explores the future of Intel in a software-defined world and the increasing interest in non-x86 platforms like ARM.
  • On the flip side, proponents who claim that platforms like ARM and others are necessary in order to move forward with SDN and NFV initiatives should probably read this article on 80 Gbps performance from an off-the-shelf x86 server. Impressive.

Security

  • It’s nice to see that work on OpenStack Barbican is progressing nicely; see this article for a quick overview of the project and an update on the status.

Cloud Computing/Cloud Management

  • SDN Central has a nice write-up on the need for open efforts in the policy space, which includes the Congress project.
  • The use of public cloud offerings as disaster recovery targets is on the rise; note this article from Microsoft on how to migrate on-premises workloads to Azure using Azure Site Recovery. VMware has a similar offering via the VMware vCloud Hybrid Service recovery-as-a-service offering.
  • The folks at eNovance have a write-up on multi-tenant Docker with OpenStack Heat. It’s an interesting write-up, but not for the faint of heart—to make their example work, you’ll need the latest builds of Heat and the Docker plugin (it doesn’t work with the stable branch of Heat).
  • Preston Bannister took a look at cloud application backup in OpenStack. His observations are, I think, rational and fair, and I’m glad to see someone paying attention to this topic (which, thus far, I think has been somewhat ignored).
  • Interested in Docker and Kubernetes on Azure? See here and here for more details.
  • This article takes a look at Heat-Translator, an effort designed to provide some interoperability between TOSCA and OpenStack HOT documents for application deployment and orchestration. The portability of orchestration resources is one of several aspects you’ll want to examine as you progress down the route of fully embracing a cloud computing operational model.

Operating Systems/Applications

  • Looks like we have another convert to Markdown—Anthony Burke recently talked about how he uses Markdown. Regular readers of this site know that I do almost all of my content generation using MultiMarkdown (a variation of Markdown with some expanded syntax options). Here’s a post I recently published on some useful Markdown tools for OS X.
  • Good to see that Ivan Pepelnjak thinks infrastructure as code makes sense. I guess that means the time I’ve spent with Puppet (you can browse Puppet-related posts here) wasn’t a waste.
  • I don’t know if I’ve mentioned this before (sorry if that’s the case), but I’m liking this “NIX4NetEng” series going on over at Nick Buraglio’s site (part 1, part 2, and part 3).
  • Mike Foley has a blog post on how to go from zero to Windows domain controller in only 4 reboots. Handy.

Storage

Virtualization

  • Running Hyper-V with Linux VMs? Ben Armstrong details what versions of Linux support the various Hyper-V features in this post.
  • Here’s a quick write-up on running VMs with VirtualBox 4.3 on a headless Ubuntu 14.04 LTS server.
  • Nested OS X guest on top of nested ESXi on top of VMware Fusion? Must be something William Lam’s tried. Go have a look at his write-up.
  • Here’s a quick update on Nova-Docker, the effort in OpenStack to allow users to deploy Docker containers via Nova. I’m not yet convinced that treating Docker as a hypervisor in Nova is the right path, but we’ll see how things develop.
  • This post is a nice write-up on the different ways to connect a Docker container to a local network.
  • Weren’t able to attend VMworld US in San Francisco last week? No worries. If you have access to the recorded VMworld sessions, check out Jason Boche’s list of the top 10 sessions for a priority list of what recordings to check out. Or need a recap of the week? See here (one of many recap posts, I’m sure).

That’s it this time around; hopefully I was able to include something useful for you. As always, all courteous comments are welcome, so feel free to speak up in the comments. In particular, if there is a technology area that I’m not covering (or not covering well), please let me know—and suggestions for more content sources are certainly welcome!


This is a semi-liveblog of the day 2 keynote at VMworld 2014 in San Francisco.

Ben Fathi, CTO of VMware, takes the stage to start the keynote, and goes through a recap of yesterday’s announcements: EVO and EVO:RAIL, vCloud Air rebranding and the availability of new services, VMware Integrated OpenStack (VIO), and new product versions (vCloud Suite 5.8, betas of vSphere and VSAN).

Fathi re-emphasizes that VMware’s SDDC vision is an architecture, one built on the “power of AND,” and that vCloud Suite (and vCloud Air) is a manifestation of that SDDC vision/architecture.

Fathi transfers the baton to Sanjay Poonen, EVP for End-User Computing at VMware. Poonen is a high-energy speaker, and his enthusiasm for end-user computing is clear (and contagious). The world of end-user computing is changing; not only are users changing, but apps and data are changing as well. Poonen talks about VMware’s EUC vision: enabling users to work “at the speed of life.” Today’s environments are far more heterogeneous than in the past. Poonen uses an example of a rooftop infinity pool in Singapore as a metaphor for the “three foundations” of EUC: SDDC, workspace services (authentication and content collaboration, for example), and then the desktop/mobile/cloud experience.

Poonen goes on to talk about VMware’s desktop leadership. Horizon delivers a unified application publishing and VDI solution. Next is Desktop-as-a-Service (DaaS), via Desktone, and Poonen mentions DaaS via vCloud Air. Next Poonen talks about real-time application delivery via the CloudVolumes acquisition. Finally, Poonen discusses the rich user experience, and he announces a partnership between Google, NVIDIA, and VMware, and rolls into a video about the results of the partnership. The partnership enables users of VMware Horizon View on VMware vSphere on next-generation Google Chromebooks to utilize massively graphics-intensive applications with outstanding performance. (This is a pretty cool announcement, IMHO.)

The second major pillar that Poonen discusses is enterprise mobility management. It’s not just about application management, but also about content management, email management, and device management. Poonen brings onto stage Kevin Ichhpurani from SAP, to discuss a partnership between VMware and SAP. The partnership brings together AirWatch and SAP Secure Mobile for the secure delivery of applications to mobile devices.

The third major pillar that Poonen discusses is content collaboration. This means “anytime/anywhere” access, hybrid deployment, and enterprise-grade functionality/support. Poonen talks about how United Airlines is transforming their operations using mobile devices (supported by Apple devices and AirWatch by VMware). Poonen talks about all the integrations that are present: integrations between SDDC and workspace services, and integration between desktop services and content collaboration. Poonen revisits the VMware Workspace Suite announcement from yesterday, drawing a comparison between vCloud Suite for SDDC and the Workspace Suite for EUC.

Poonen mentions that VMware is being designated as the leader by GigaOm and Radicati Group.

At this point, Poonen brings out Kit Colbert, CTO for EUC at VMware. Colbert is well-known to the VMworld crowd and is a veteran at VMware. Colbert starts his discussion using the idea of a “mobile cloud architecture” as the basis for the discussion. Colbert starts the first demo of the day by framing the demo around the concept of a doctor moving through various devices and applications over the course of a day. The demo spans a Mac in the doctor’s office, an iPad that he carries with him, and a thin client leveraging rich 3D graphics functionality. AirWatch Secure Content Locker facilitates some fictional collaboration between Colbert and Poonen (who is also a doctor in this example) across different security zones. The summary of the demo is that all the applications were in one place, data followed the user, and 3D images were available on any device.

Next Colbert takes the audience into a demo of CloudVolumes, and how CloudVolumes makes application delivery fast and seamless (like provisioning applications via AirWatch to a mobile device). In the demo, Colbert provisions Office to a VDI desktop almost instantaneously.

Application delivery is important, but desktop delivery and desktop provisioning are also important. Colbert next dives into more detail on Project Fargo, which enables vastly faster provisioning of virtual desktops via a cloning process. The clone is very lightweight because it shares not only disk but also memory with the original. Colbert begins to tie things together: VMware uses Project Fargo to quickly clone the desktop, and then uses CloudVolumes to attach applications to that desktop. When the desktop is no longer needed, the desktop is discarded. This radically simplifies the administration and management of desktops and applications. Colbert name-drops a new project name—Project Meteor—but doesn’t go into a great level of detail on exactly what Meteor encompasses.

Poonen and Colbert turn the stage over to Raghu Raghuram, who heads up VMware’s SDDC efforts. Raghuram reviews the achievements of the core components of the SDDC architecture—compute (via vSphere, including the vSphere 6.0 beta announced yesterday), storage (via VSAN), networking (via NSX—go VCDX-NV!), and management (via the newly-rebranded vRealize suite). Once again, Raghuram talks about SDDC as “the power of AND”.

Raghuram next transitions into a review of the “three paths” to SDDC that were introduced by Gelsinger yesterday: build your own, converged infrastructure, and hyper-converged infrastructure. Naturally, this leads to a discussion of the EVO family, especially EVO:RAIL. Fathi takes the stage again and prompts Raghuram into discussing how quickly EVO:RAIL can be turned up (up and running in 15 minutes, scales up to 100 VMs or 250 desktop VMs, supports non-disruptive upgrades), and that leads into a demo of EVO:RAIL. The hardware specification for EVO:RAIL is a 2U, four-node enclosure (think Dell C6100 or similar hardware form factor). EVO:RAIL building blocks will automatically discover each other, allowing users to easily build 16 node clusters (four EVO:RAIL blocks). EVO:RAIL also offers a greatly simplified web-based UI, although the vSphere Web Client is still available for use. EVO:RAIL includes vSphere and VSAN, and includes Log Insight as well.

EVO:RACK is a larger-scale implementation that builds on EVO:RAIL and includes all of the vCloud Suite. It starts at half a rack, and scales to multiple racks (no mention I caught of exactly how many racks). VMware is shooting for being up and running deploying applications in less than 2 hours for EVO:RACK. EVO:RACK, of course, is still in technical preview.

Next Raghuram moves onto talking about VMware Integrated OpenStack (VIO), which was announced yesterday as being available in beta. Over the next few minutes he discusses VMware’s involvement in OpenStack, and reviews why running OpenStack on VMware is the best choice for deploying OpenStack today. It doesn’t hurt that VMware offers a single point of contact for deploying OpenStack on VMware using VIO, although VMware continues to partner with companies like Canonical, HP, SuSE, and Mirantis for OpenStack deployments as well. (HP announced yesterday that HP Helion will support vSphere and NSX.)

From there, Raghuram transitions to a discussion of traditional applications and next-generation applications. This leads to mention of support for 4 vCPUs with VMware Fault Tolerance in the vSphere 6 beta. (This expansion of support is a long time coming.) Cross-vCenter vMotion is also coming in vSphere 6, as well as long-distance vMotion (coast-to-coast vMotion mentioned as an example). Raghuram transitions to Fathi for a discussion of next-generation applications.

Fathi talks about the use of microservices as the basis for next-generation applications, which clearly indicates he will be talking about containers. Fathi paints containers as “having been around for years,” including pointing to a feature in Windows Data Center Server 15 years ago. However, Docker has clearly captured the attention of developers, and so Fathi talks about where Docker fits. This leads into the “containers without compromise” message, which means that running containers on VMs on VMware’s SDDC architecture is the best way to deliver containerized applications and microservices today. Fathi mentions VMware working closely with Docker and Kubernetes, and explains the concept of a Kubernetes pod as a collection of containers. The value that VMware offers in a Docker/Kubernetes environment includes persistent storage via VSAN, pod mobility via vMotion/DRS, enhanced security via the VM operational model, and the rich ecosystem of tools supporting VMware virtualized infrastructure. Fathi re-iterates that SDDC is a single platform for running traditional applications as well as next-generation applications, and reminds attendees that CloudFoundry has supported containers (via Warden) for a number of years. (In other words, containers are not new to VMware.) VMware announces today that VMware is working with Docker, Google, and Pivotal to make containers a “first-class” citizen in the VMware SDDC architecture.

This leads into a demo by Fathi showing the use of vCAC (vRealize Automation?) deploying a blueprint. The demo shows CoreOS and Kubernetes deploying Vulcan, which is a service registration/service discovery tool. Raghuram asks about Project Fargo, and Fathi talks about how Fargo can help make containers even better than on bare metal through the use of technologies like those in Project Fargo.

Next up, Raghuram talks about the vRealize suite of management applications and tools. vRealize will be available as an on-premises implementation as well as available via vCloud Air in a SaaS model. I think that the value of this second approach is possibly being overlooked by a good number of conference attendees, but it is quite a significant announcement in my humble opinion.

Fathi takes over again to discuss the need for policy in the SDDC architecture. Policy can (and should) be applied not only to networking, but also to storage, compute, and other areas. This leads into a demo, hosted within vCAC (vRealize Automation?), and shows how various policies like storage policies are implemented to affect how applications are deployed within an SDDC architecture. (There is, of course, another side of policy that isn’t being mentioned here, and that’s what Congress attempts to address. But that’s another story for another day.) There is a nice demo of integrations between vCAC and NSX, where vCAC creates NSX security groups (with associated firewall rules) on-demand as part of a blueprint. That’s nice, and will be quite useful I think.

Speaking of NSX, Fathi jumps into a discussion of microsegmentation as a key value-add for VMware NSX in data centers. While microsegmentation was possible before VMware NSX, the value that NSX offers is that microsegmentation now becomes more manageable and more operationally feasible than it’s been in the past. Fathi reminds the audience that this functionality can be driven by policy.

Raghuram summarizes the various demonstrations and announcements, and leaves the stage. Fathi then invites Simone Brunozzi, VP and Chief Technologist for Hybrid Cloud, to the stage. Brunozzi uses his Google Glass as an excuse to show off a mobile application he uses to “check” on his data center. This leads into a demo that uses vCenter Operations Manager to show the health of vCloud Air resources as well as VMware NSX resources—this shows the expanding reach of vCenter Operations Manager and the tight integration between the various product lines that VMware is advancing. This leads into a demonstration of extending a network from your own data center into vCloud Air, and then migrating workloads (via a copy operation) to vCloud Air directly from within the vSphere Web Client. That’s nothing new, according to Fathi, but what is new is the coordination between firewall rules in NSX and firewall rules in vCloud Air to enable full connectivity between on-premises workloads and workloads in vCloud Air.

Brunozzi next demos using vSphere Replication to replicate to a cloud provider for cloud-based recovery. There was no mention I caught of which providers support this, though I’m sure that vCloud Air via Recovery-as-a-Service does.

At this point I needed to leave to get ready for a session, so I wasn’t able to capture the rest of the keynote. I hope that what I was able to capture was useful—feel free to share any corrections or additions in the comments below. Thanks!


This is a liveblog of the day 1 keynote at VMworld 2014 in San Francisco, CA. (Well, it’s a “semi-liveblog”; I’ll post this immediately after the keynote session wraps up.) As in previous years, the keynote is being held in the massive space in Moscone North, and this year—like previous years—it looks like there is quite a crowd gathered here this morning. I have the privilege of sitting with newly-minted VCDX-NVs and other VCDXes.

The keynote starts with a catchy, jazzy dance/music piece, and then Robin Matlock, Chief Marketing Officer for VMware, takes the stage. Matlock shares that over 22,000 people are gathered here for VMworld, which (if I recall correctly) is down a little bit from last year. Matlock talks about change, and how change can be either a barrier or an opportunity. That leads into a discussion of the building of the Golden Gate Bridge as an allegory for pushing boundaries, exploring endless possibilities, and expanding your knowledge and expertise. This fits into the “No Limits” theme for the conference this week.

After discussing “Operation Giveback” (a charity fundraiser backed by the VMware Foundation), Matlock turns the stage over to VMware’s CEO, Pat Gelsinger, who comes onto the stage in a dramatic “breaking through” effect.

Gelsinger starts his portion of the keynote by introducing the “liquid world,” where change is paramount. Gelsinger provides some examples of various businesses that have embraced the “liquid business” model. The biggest risk to success, according to Gelsinger, is perpetuating the status quo. Change is inevitable. The best way to predict the future is to invent it, and Gelsinger says that’s what brave IT leaders need to do. This leads Gelsinger into a discussion of what the word “brave” means, and that in turn leads to a family-friendly discussion of the first day of school, a kid’s penalty shot in soccer, a driving test, or a public speaking event. Next up is a “who’s who” list of brave business leaders, like Elon Musk and the leader of 23andMe (I didn’t catch the spelling for her name). VMware is on a voyage of brave transformation itself, and Gelsinger provides examples of VMware’s BYOD efforts and running all of VMware’s SAP ERP efforts on vCloud Air.

Gelsinger brings the “fluid” and “brave” themes together in that brave IT involves fluidity and choice (and a third item I couldn’t catch). This leads into a discussion of the software-defined data center (SDDC). As a result of this discussion, Gelsinger goes into some announcements:

  • VMware is announcing VMware vCloud Suite 5.8, which includes new suite “fit and finish” changes
  • VMware is announcing vSphere 6.0 beta
  • VMware is announcing VSAN 2.0 beta
  • As part of the vSphere 6.0 beta, VMware is also announcing Virtual Volumes (VVols)
  • VMware is rebranding the management suite as VMware vRealize Suite

The path to SDDC has three branches: “build your own”, converged infrastructure, and hyper-converged infrastructure. This leads Gelsinger to announce VMware EVO, which is VMware’s family of hyper-converged infrastructure offerings. EVO is the SDDC packaged with hardware to provide the fastest way to realize the benefits of the SDDC. The first member of the EVO family is EVO:RAIL, which is a “building” block designed at around 100 VMs of capacity and is designed to be up and running in 15 minutes. Hardware partners for EVO:RAIL include Dell, EMC, Fujitsu, Inspur, Net One, and Supermicro.

The EVO Zone, found in the Solutions Exchange, has some examples of EVO products being brought to market.

The second member of the EVO family is EVO:RACK, which is being announced as a technical preview. EVO:RACK builds on EVO:RAIL for greater scale—basically building an “entire cloud” in 2 hours or less. There will be a range of certified hardware solutions available as EVO:RACK solutions. VMware is also joining the OpenCompute Project, and anticipates that EVO:RAIL and EVO:RACK will be available on OCP hardware.

Gelsinger also announces VMware Integrated OpenStack (VIO), VMware’s own OpenStack distribution that will incorporate key VMware technologies like vSphere, VSAN, and NSX. VIO is available in beta.

Continuing the discussion around next-generation platforms, Gelsinger announces partnerships with Docker, Google, and Pivotal around building open container APIs and making containers run better on VMware’s SDDC platform—”containers without compromise.”

That leads Gelsinger into a discussion of security, and that naturally involves VMware NSX. Microsegmentation is becoming a key use case for customers adopting VMware NSX.

After a brief discussion of end-user computing—admittedly brief because Sanjay Poonen will be discussing this in more detail in tomorrow’s keynote—Gelsinger moves on to a discussion of hybrid cloud. VMware vCloud Hybrid Service has been rebranded to vCloud Air, and all VMware products that are offered “as a service” will be rebranded as part of the Air family. Gelsinger brings Bill Fathers, EVP & GM for Hybrid Cloud at VMware, onto the stage.

Fathers recaps what’s happened with vCloud Air over the last 12 months; the service was officially launched last year in the day 1 keynote at VMworld 2013. vCloud Air has been successful in meeting all of the goals originally announced. Fathers reviews the key benefits of using public cloud, especially VMware’s public cloud: agility, efficiency, and cost savings. Fathers discusses the various phases of cloud adoption, and states that he believes the market is currently transitioning from the experimental phase to the professional era.

For companies that are successfully transitioning into the professional era—which involves much greater use of public cloud resources—the two things these companies are doing are 1) thinking of the apps first; and 2) putting the IT professionals back in charge. According to Fathers, vCloud Air provides the perfect platform for accomplishing both of these goals, due in no small part to the fact that vCloud Air uses the same hypervisor, same VM formats, and same management tools as companies are using today in their private clouds.

Fathers announces a government-focused FedRAMP-compliant vCloud Air offering launching soon (September); this will be very useful for US government-focused partners, service providers, and systems integrators. VMware is also expanding the number of vCloud Air partners in the vCloud Air Network, with 3900 partners available around the world in over 100 countries. vCloud Air Network is, according to Fathers, the world’s largest hybrid cloud network.

Next Fathers moves into a discussion of new services and offerings available with vCloud Air. These new services include a continuous integration as a service offering (CIaaS?), a MySQL and MS SQL database-as-a-service offering (which will later expand to all popular RDBMSes and will offer DR services for on-premises databases), an object storage service based on EMC ViPR, AirWatch and Pivotal CloudFoundry mobile services, and vRealize Air Automation (which is hosted cloud management tooling available as SaaS). Fathers also announces a beta of on-demand, PAYG (pay as you go) access to vCloud Air.

Fathers transitions back to Gelsinger, who wraps up the themes and announcements from the keynote, and introduces Carl Eschenbach, President and COO of VMware.

Eschenbach reinforces that the “engine of innovation” is alive and well at VMware, but it is the brave who drive change (referring to the VMworld attendees in the audience). That leads into a customer discussion between Eschenbach with three key customers: MedTronic, MIT IST (Information Systems and Technology), and Ford. Each of the customers comes out on the stage and has a brief conversation with Eschenbach.

At this point I had to leave to prepare for a session, so I wasn’t able to capture any of the rest of the keynote.


In this post, I’ll show you how I got Arista’s vEOS software running under KVM to create a virtualized Arista switch. There are a number of other articles that help provide instructions on how to do this, but none of those that I found included the use of libvirt and/or Open vSwitch (OVS).

In order to run vEOS, you must first obtain a copy of vEOS. I can’t provide you with a copy; you’ll have to register on the Arista Networks site (see here) in order to gain access to the download. The download consists of two parts:

  1. The Aboot ISO, which contains the boot loader
  2. The vEOS disk image, provided as a VMware VMDK

Both of these are necessary; you can’t get away with just one or the other. Further, although the vEOS disk image is provided as a VMware VMDK, KVM/QEMU is perfectly capable of using the VMDK without any conversion required (this is kind of nice).

Once you’ve downloaded these files, you can use the following libvirt domain XML definition to create a VM for running Arista vEOS (you’d use a command like virsh define <filename>).

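A trimmed-down sketch of what that domain XML might look like is shown below. Treat it as a starting point only: the domain name, memory size, file paths, and the name of the OVS-backed libvirt network are all placeholder assumptions, and it presumes you’ve already defined a libvirt network with an Open vSwitch virtualport type and a portgroup named “trunked”.

<domain type='kvm'>
  <name>veos-01</name>
  <memory unit='MiB'>2048</memory>
  <vcpu>1</vcpu>
  <os>
    <type arch='x86_64'>hvm</type>
    <!-- Boot from the Aboot ISO first, then the vEOS disk image -->
    <boot dev='cdrom'/>
    <boot dev='hd'/>
  </os>
  <devices>
    <!-- Aboot ISO attached as an IDE CD-ROM; note type='raw' -->
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='/var/lib/libvirt/images/Aboot-vEOS.iso'/>
      <target dev='hdc' bus='ide'/>
      <readonly/>
    </disk>
    <!-- vEOS disk image attached as an IDE disk; note type='vmdk' and no SCSI devices -->
    <disk type='file' device='disk'>
      <driver name='qemu' type='vmdk'/>
      <source file='/var/lib/libvirt/images/vEOS.vmdk'/>
      <target dev='hda' bus='ide'/>
    </disk>
    <!-- Three e1000 NICs attached to an OVS-backed libvirt network via the "trunked" portgroup -->
    <interface type='network'>
      <source network='ovs-network' portgroup='trunked'/>
      <model type='e1000'/>
    </interface>
    <interface type='network'>
      <source network='ovs-network' portgroup='trunked'/>
      <model type='e1000'/>
    </interface>
    <interface type='network'>
      <source network='ovs-network' portgroup='trunked'/>
      <model type='e1000'/>
    </interface>
    <graphics type='vnc' port='-1' autoport='yes'/>
  </devices>
</domain>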

There are a few key things to note about this libvirt domain XML:

  • Note the boot order; the VM must boot from the Aboot ISO first.
  • Both the Aboot ISO as well as the vEOS VMDK are attached to the VM as devices, and you must use an IDE bus. Arista vEOS will refuse to boot if you use a SCSI device, so make sure there are no SCSI devices in the configuration. Pay particular attention to the type= parameters that specify the correct disk formats for the ISO (type “raw”) and VMDK (type “vmdk”).
  • For the network interfaces, you’ll want to be sure to use the e1000 model.
  • This example XML definition includes three different network interfaces. (More are supported; up to 7 interfaces on QEMU/KVM.)
  • This XML definition leverages libvirt integration with OVS so that libvirt automatically attaches VMs to OVS and correctly applies VLAN tagging and trunking configurations. In this case, the network interfaces are attaching to a portgroup called “trunked”; this portgroup trunks VLANs up to the guest domain (the vEOS VM, in this case). In theory, this should allow the vEOS VM to support VLAN trunk interfaces, although I had some issues making this work as expected and had to drop back to tagged interfaces.

Once you have the guest domain defined, you can start it by using virsh start <guest domain name>. The first time it boots, it will take a long time to come up. (A really long time—I watched it for a good 10 minutes before finally giving up and walking away to do something else. It was up when I came back.) According to the documentation I’ve found, this is because EOS needs to make a backup copy of the flash partition (which in this case is the VMDK disk image). It might be quicker for you, but be prepared for a long first boot just in case.

Once it’s up and running, use virsh vncdisplay to get the VNC display of the vEOS guest domain, then use a VNC viewer to connect to the guest domain’s console. You won’t be able to SSH in yet, as all the network interfaces are still unconfigured. At the console, set an IP address on the Management1 interface (which will correspond to the first virtual network interface defined in the libvirt domain XML) and then you should have network connectivity to the switch for the purposes of management. Once you create a username and a password, then you’ll be able to SSH into your newly-running Arista vEOS switch. Have fun!
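
To recap the workflow in command form (the XML filename and guest domain name are placeholders matching the sketch above):

virsh define veos-01.xml       # create the guest domain from the XML definition
virsh start veos-01            # boot the VM; the first boot can take a long time
virsh vncdisplay veos-01       # find the VNC display to use for console access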

For additional information and context, here are some links to other articles I found on this topic while doing some research:

If you have any questions or need more information, feel free to speak up in the comments below. All courteous comments are welcome!


It’s that time again—time for community voting on sessions for the fall OpenStack Summit, being held in Paris this year in early November. I wanted to take a moment and share some of the sessions in which I’m involved and/or that I think might be useful. It would be great if you could take a moment to add your votes for the sessions.

My Sessions

I have a total of four session proposals submitted this year:

Congress Sessions

You may also be aware that I am involved with a project called Congress, which aims to bring an overarching policy service to OpenStack. Here are some sessions pertaining to Congress:

VMware Sessions

Arvind Soni, one of the product managers for OpenStack at VMware, kindly pulled together this list of VMware-related sessions, so feel free to have a look at any of these and vote on what sounds appealing to you.

Other Sessions

There are way too many sessions to list all the interesting ones, but here are a few that caught my eye:

There are a bunch more that looked interesting to me, but I’ll skip listing them all here—just hop over to the OpenStack site and vote for the sessions you want to see.


This post describes a fix I found for a problem I had when booting KVM guest domains on the Ubuntu/KVM hypervisors in my home lab. I’d been struggling with this issue for quite some time, but only recently found what I believe to be the final fix for the problem.

First, allow me to provide a bit of background. Some time ago—I’d say around August 2012, when I left the vSpecialist team at EMC to join an OpenStack-focused team in another part of EMC—I moved my home lab over completely to Ubuntu 12.04 LTS with the KVM hypervisor. This was an important step in educating myself on Linux, KVM, libvirt, and Open vSwitch (OVS), all of which are critical core components in most installations of OpenStack.

Ever since making that change—particularly after adding some new hardware, a pair of Dell C6100 servers, to my home lab—I would experience intermittent problems booting a KVM guest. The guest would appear to boot properly, but then hang shortly after a message about activating swap space and fsck reporting that the file system was clean. Sometimes, rebooting the guest would work; many times, rebooting the guest didn’t work. Re-installing the guest sometimes worked, but sometimes it didn’t. There didn’t appear to be any consistency with regard to the host (the issue occurred on all hosts) or guest configuration. The only consistency appeared to be with Ubuntu, as virtually (no pun intended) all my KVM guests were running Ubuntu.

Needless to say, this was quite frustrating. I tried all the troubleshooting I could imagine—deleting and recreating swap space, manually checking the file system(s), various different installation routines—and nothing seemed to make any difference.

Finally, just in the last few weeks, I stumbled across this page, which indicated that adding “nomodeset” to the grub command line fixed the problem. This was a standard part of my build (it kept the console from getting too large when using VNC to connect to the guest), but it required that I was able to successfully boot the VM first. I’d noted that once I had been able to successfully boot a guest and add “nomodeset” to the grub configuration, I didn’t have any further issues with that particular guest; however, I explained that away by saying that the intermittent boot issue must have been some sort of first-time boot issue.

In any case, that page linked to this ServerFault entry, which also indicated that the use of “nomodeset” helped fix some (seemingly) random boot problems. The symptoms described there—recovery mode worked fine, booting normally after booting into recovery mode resulted in an “initctl: event failed” error—were consistent with what I’d been seeing as well.

So, I took one of the VMs that was experiencing this problem, booted it into recovery mode, edited the /etc/default/grub file to include “nomodeset” on the GRUB_CMDLINE_LINUX_DEFAULT line, and rebooted. The KVM guest booted without any issues. Problem fixed (apparently).
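
For reference, here’s roughly what that change looks like on a stock Ubuntu guest; the other options already on the line (such as “quiet splash”) may differ in your build, and the edit doesn’t take effect until update-grub regenerates the active GRUB configuration:

# /etc/default/grub (inside the guest)
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash nomodeset"

# Regenerate /boot/grub/grub.cfg, then reboot
sudo update-grub
sudo reboot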

Thus far, this has fixed the intermittent boot issue on every KVM guest I’ve tried, so I’m relatively comfortable recommending it as a potential change you should explore if you experience the same problem/symptoms. I can’t guarantee it will work, but it has worked for me so far.

Good luck!

