Virtualization

As most of you probably know, I visit quite a few VMUG User Conferences around the United States and around the world. I’d probably do even more if my calendar allowed, because it’s truly an honor for me to have the opportunity to help educate the VMware user community. I know I’m not alone in that regard; there are numerous VMware “rock stars” (not that I consider myself a “rock star”) out there who also work tirelessly to support the VMware community. One need not look very far to see some examples of these types of individuals: Mike Laverick, William Lam, Duncan Epping, Josh Atwell, Nick Weaver, Alan Renouf, Chris Colotti, Cody Bunch, or Cormac Hogan are all great examples. (And I’m sure there are many, many more I’ve forgotten!)

However, one thing that has consistently been a topic of discussion among those of us who frequent VMUGs has been this question: “How do we get users more engaged in VMUG?” VMUG is, after all, the VMware User Group. And while all of us are more than happy to help support VMUG (at least, I know I am), we’d also like to see more user engagement—more customers speaking about their use cases, their challenges, the things they’ve learned, and the things they want to learn. We want to see users get connected with other users, to share information and build a community-based body of knowledge. So how can we do that?

As I see it, there are a variety of reasons why users don’t volunteer to speak:

  • They might be afraid of public speaking, or aren’t sure how good they’ll be.
  • They feel like the information they could share won’t be helpful or useful to others.
  • They aren’t sure how to structure their presentation to make it informative yet engaging.

We (meaning a group of us that support a lot of these events) have tossed around a few ideas, but nothing has ever really materialized. Today I hope to change all that. Today, I’m announcing that I will personally help mentor 5 different VMware users who are willing to step up and volunteer to speak for the first time at a local VMUG meeting in the near future.

So what does this mean?

  • I will help you select a topic on which to speak (in coordination with your local VMUG leader).
  • I will provide guidance and feedback on gathering your content.
  • I will review and provide feedback and suggestions for improving your presentation.
  • If desired, I will provide tips and tricks for public speaking.

And I’m calling on others within the VMUG community who are frequent speakers to do the same. I think that Mike Laverick might have already done something like this; perhaps the others have as well. If so, that’s awesome. If not, I challenge you, as someone viewed in a technical leadership role within the VMware and VMUG communities, to use that leadership role in a way that I hope will reinvigorate and renew user involvement and participation in the VMware/VMUG community.

If you’re one of the 5 people who’s willing to take me up on this offer, the first step is to contact me and your local VMUG leader to express your interest. Don’t have my e-mail address? Here’s your first challenge: it’s somewhere on this site.

If you’re already a frequent speaker at VMUGs and you, too, want to help mentor other speakers, you can either post a comment here to that effect (and provide people with a way of getting in touch with you), or—if you have your own blog—I encourage you to make the same offer via your own site. Where possible, I’ll try to update this (or you can use trackbacks) so that readers have a good idea of who out there is willing to provide assistance to help them become the next VMUG “rock star” presenter.

Good luck, and I look forward to hearing from you!

UPDATE: A few folks have noted that all the names I listed above are VMware employees, so I’ve added a couple others who are not. Don’t read too much into that; it was all VMware employees because I work at VMware, too, and they’re the ones I communicate with frequently. There are lots of passionate and dedicated VMUG supporters out there—you know who you are!

Also, be sure to check the comments; a number of folks are volunteering to also mentor new speakers.

This is session EDCS008, “Virtualizing the Network to Enable a Software-Defined Infrastructure (SDI).” The speakers are Brian Johnson (@thehevy on Twitter) from Intel and Jim Pinkerton from Microsoft. Brian is a Solution Architect; Jim is a Windows Server Architect. If you’ve ever been in one of Brian’s presentations, you know he does a great job of really diving deep into some of this stuff. (Can you tell I’m a fan?)

Brian starts the session with a review of how the data center has evolved over the last 10 years or so, driven by the widespread adoption of compute virtualization, increased CPU capacity, and the adoption of 10Gb Ethernet. This naturally leads to a discussion of software-defined networking (SDN) as a means whereby the network can evolve to keep up with the rapid pace of change and innovation in other areas of the data center. Why is this a big deal? Brian draws a comparison between property management and how IT is evolving:

  • A rental house is pretty easy to manage. One tenant, infrequent change, long-term investments.
  • An apartment means more tenants, but still relatively infrequent change.
  • A hotel means lots of tenants and the ability to handle frequent change and lots of room turnover.

The connection here is VMs—we’re now running lots of VMs, and the VMs change regularly. The infrastructure needs to be ready to handle this rapid pace of change.

At this point, Jim Pinkerton of Microsoft takes over to discuss how Windows Server thinks about this issue and these challenges. According to Jim, the world has moved beyond virtualization—it now needs the ability to scale and secure many workloads cost-effectively. You need greater automation, and you need to support any type of application. Jim talks about private clouds, hosting (IaaS-type services), and public clouds. He points out that MTTR (Mean Time to Repair) is a more important metric than MTBF (Mean Time Between Failures).

Driven by how the data center is evolving (the points in the previous paragraph), the network needs to evolve as well:

  • Deliver networking as part of a pooled, automated infrastructure
  • Ensure multitenant isolation, scale, and performance
  • Expand data center capacity seamlessly as per business needs
  • Reduce operational complexity

Out of these design principles comes SDN, according to Pinkerton. Key attributes of SDN, according to Microsoft, are flexibility, control, and automation. At this point Pinkerton digresses into a discussion of SMB3 and its performance characteristics over 10Gb Ethernet—which, frankly, is completely unrelated to the topic of the presentation. After a few slides of discussing SMB3 with very little relevance to the rest of the discussion, Pinkerton moves back into a discussion of the virtual switch found in Windows Server 2012 R2.

Brian now takes over again, focusing on virtual switch performance and behavior. East-west traffic between VMs can hit 60–70Gbps, because it all happens inside the server. How do we maintain that traffic performance when we see east-west traffic between servers? We can deploy more interfaces, which is commonly seen. Moving to 10Gb Ethernet is another solution. Intel needed to add features to their network controllers—features like stateless offloads, virtual machine queues, and SR-IOV support—in order to drive performance for multiple 10Gb Ethernet interfaces. SR-IOV can help address some performance and utilization concerns, but this presents a problem when working with network virtualization. If you’re bypassing the hypervisor, how do you get on the virtual network?

Brian leaves this question for now to talk about how network virtualization with overlays helps address some of the network provisioning concerns that exist today. He provides an example of how using overlays—he uses NVGRE, since this is a joint presentation with Microsoft—can allow tenants (customers, internal business units, etc.) to share private address spaces and eliminate many manual VLAN configuration tasks. He makes the point that network virtualization is possible without SDN, but SDN makes it much easier and simpler to manage and implement network virtualization.

One drawback of overlays is that many network interface cards (NICs) today don’t “understand” the overlays, and therefore can’t perform certain hardware offloads that help optimize traffic and utilization. However, Brian shows a next-gen Intel NIC that will understand network overlays and will be able to perform offloads (like LSO, RSS, and VMQ) on encapsulated traffic.

This leads Brian to a discussion of Intel Open Network Platform (ONP), which encompasses two aspects:

  1. Intel ONP Switch reference design (aka “Seacliff Trail”), which leverages Intel silicon to support SDN and network virtualization
  2. Intel ONP Server reference design, which shows how to optimize virtual switching using Intel’s Data Plane Development Kit (DPDK)

The Intel ONP Server reference design (sorry, can’t remember the code name) actually uses Open vSwitch (OVS) as a core part of its design.

Intel ONP includes something called FlexPipe (this is part of the Intel FM6700 chipset) to enable faster innovation and quicker support for encapsulation protocols (like NVGRE, VXLAN, and whatever might come next). The Intel ONP Switch can also serve as a bridge to connect physical workloads into encapsulated virtual networks, and it can do so at full line rate using 40Gbps uplinks.

At this point, Brian and Jim wrap up the session and open up for questions and answers.

This is a liveblog of Intel Developer Forum (IDF) 2013 session EDCS003, titled “Enhancing OpenStack with Intel Technologies for Public, Private, and Hybrid Cloud.” The presenters are Girish Gopal and Malini Bhandaru, both with Intel.

Gopal starts off by showing the agenda, which will provide an overview of Intel and OpenStack, and then dive into some specific integrations in the various OpenStack projects. The session will wrap up with a discussion of Intel’s Open IT Cloud, which is based on OpenStack. Intel is a Gold Member of the OpenStack Foundation, has made contributions to a variety of OpenStack projects (tools, features, fixes and optimizations), has built its own OpenStack-based private cloud, and is providing additional information and support via the Intel Cloud Builders program.

Ms. Bhandaru takes over to provide an overview of the OpenStack architecture. (Not surprisingly, they use the diagram prepared by Ken Pepple.) She tells attendees that Intel has contributed bits and pieces to many of the various OpenStack projects. Next, she dives a bit deeper into some OpenStack Compute-specific contributions.

The first contribution she mentions is Trusted Compute Pools (TCP), which was enabled in the Folsom release. TCP relies upon the Trusted Platform Module (TPM), which in turn builds on Intel TXT and Trusted Boot. Together with the Open Attestation (OAT) SDK (available from https://github.com/OpenAttestation/OpenAttestation), Intel has contributed a “Trust Filter” for OpenStack Compute as well as a “Trust Filter UI” for OpenStack Dashboard. These components allow for hypervisor/compute node attestation to ensure that the underlying compute nodes have not been compromised. Users can then request that their instances are scheduled onto trusted nodes.
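To make this more concrete, here’s a rough sketch of how an operator might have consumed trusted pools at the time, based on the OpenStack documentation of that era (the filter name, config section, and flavor/image names below are illustrative and varied by release):

# In nova.conf, add TrustedFilter to the scheduler filters and point the
# [trusted_computing] section at your OAT attestation server. Then tag a
# flavor so that instances using it land only on attested nodes:
nova flavor-key m1.trusted set trust:trusted_host=trusted
nova boot --flavor m1.trusted --image precise-cloudimg trusted-instance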

Intel has also done work on TCP plus geo-tagging, which builds on TCP to enforce policies about where instances are allowed to run. This includes a geo attestation service and Dashboard extensions to support that functionality. This work is not yet complete, but it is captured in current OpenStack blueprints.

In addition to trust, Intel has done work on security with OpenStack. Intel’s work focuses primarily around key management. Through collaboration with Rackspace, Mirantis, and some others, Intel has proposed a new key management service for OpenStack. This new service would rely upon good random number generation (which Intel strengthened in the Xeon E5 v2 release announced earlier today), secure storage (to encrypt the keys), careful integration with OpenStack Identity (Keystone) for authentication and access policies, extensive logging and auditing, high availability, and a pluggable backend (similar to Cinder/Neutron). This would allow encryption of Swift objects, Glance images, and Cinder volumes. The key manager project is called Barbican (https://github.com/cloudkeep/barbican) and provides integration with OpenStack Identity. In the future, they are looking at creation and certification of private/public key pairs, software support for periodic background tasks, KMIP support, and potential AES-XTS support for enhanced performance. This will also leverage Intel’s AES-NI support in newer CPUs/chipsets.

Intel also helped update the OpenStack Security Guide (http://docs.openstack.org/sec/).

Next, Intel talks about how they have worked to expose hardware features into OpenStack. This would allow for greater flexibility with the Nova scheduler. This involves work in libvirt as well as OpenStack, so that OpenStack can be aware of CPU functionality (which, in turn, might allow cloud providers to charge extra for “premium images” that offer encryption support in hardware). The same goes for exposing PCI Express (PCIe) accelerator support into OpenStack.

Gopal now takes over and moves the discussion into storage in OpenStack. With regard to block storage via Cinder, Intel has incorporated support to filter volumes based on availability zone, capabilities, capacity, and other features so that volumes are allocated more intelligently based on workload and type of service required. By granting greater intelligence to how volumes are allocated, cloud service providers can offer differentiated (read: premium priced) services for block storage. This work is enabled in the Grizzly release.

In addition to block storage, many OpenStack environments also leverage Swift for object storage. Intel is focused on enabling erasure coding in Swift, which would reduce storage requirements in Swift deployments. Initially, erasure coding will be used for “cold” objects (objects that aren’t accessed or updated frequently); this helps preserve the service level for “hot” objects. Erasure coding would replace triple replication to reduce storage requirements in the Swift capacity tier. (Note that this is something I also discussed with SwiftStack a couple weeks ago during VMworld.)

Intel has also developed something called COSBench, which is an open source tool that can be used to measure cloud object storage performance. COSBench is available at https://github.com/intel-cloud/cosbench.

At this point, Gopal transitions to networking in OpenStack. This discussion focuses primarily around Intel Open Network Platform (ONP). There’s another session that will go deeper on this topic; I expect to attend that session and liveblog it as well.

The networking discussion is very brief; perhaps because there is a dedicated session for that topic. Next up is Intel’s work with OpenStack Data Collection (Ceilometer), which includes work to facilitate the transformation and collection of data from multiple publishers. In addition, Intel is looking at enhanced usage statistics to affect compute scheduling decisions (essentially this is utilization-based scheduling).

Finally, Gopal turns to a discussion of Intel IT Open Cloud, which is a private cloud within Intel. Intel is now at 77% virtualized, with 80% of all new servers being deployed in the cloud, and instances can be deployed in less than an hour. Intel estimates a savings of approximately $21 million so far. Where is Intel IT Open Cloud headed? Intel IT is looking at using all open source software for Intel IT Open Cloud (which implies that it is not built entirely on open source software today). There is another session on Intel IT Open Cloud tomorrow that I will try to attend.

At this point, Gopal summarizes all of the various Intel contributions to OpenStack (I took a picture of this and posted it via Twitter) and ends the session.

It’s been a little while since I talked about libvirt directly, but in this post I’d like to go back to libvirt to talk about how you can edit a guest domain’s libvirt XML definition to adjust access to the VNC console for that guest domain.

This is something of a follow-up to my post on using SSH to access the VNC consoles of KVM guests, in which I showed how to use SSH tunneling (including multi-hop tunnels) to access the VNC console of a KVM guest domain. That post was necessary because, by default, libvirt binds the VNC server for a guest domain to the KVM host’s loopback address. Thus, the only way to access the VNC console of a KVM guest domain is via SSH, and then tunneling VNC traffic to the host’s loopback address.

However, it’s possible to change this behavior. Consider the following snippet of libvirt XML code; this is responsible for setting up the VNC access to the guest domain’s console:

<graphics type='vnc' port='-1' autoport='yes'/>

This is the default configuration; it binds a VNC server for this guest domain to the host’s loopback address and auto-allocates a port for VNC (the first guest gets screen 0/port 5900, the second guest gets screen 1/port 5901, etc.). With this configuration, you must use SSH and tunneling in order to gain access to the guest domain’s VNC console.
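As a quick refresher, the tunneling command looks something like this (the user and host names are placeholders; adjust the port for guests beyond the first):

# Tunnel local port 5900 to the VNC server bound to the KVM host's loopback
ssh -L 5900:127.0.0.1:5900 admin@kvm-host
# Then point a VNC viewer at localhost:5900 (display :0)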

Now consider this libvirt XML configuration:

<graphics type='vnc' port='-1' autoport='yes' listen='0.0.0.0'/>

With this configuration in place, the VNC console for that guest domain will be available on any interface on the host, but the port will still be auto-allocated (first guest domain powered on gets port 5900, second guest domain powered on gets port 5901, etc.). With this configuration, anyone with a VNC viewer on your local network can gain access to the console of the guest domain. As you might expect, you could change the '0.0.0.0' to a specific IP address assigned to the KVM host, and you could limit access to the VNC port via IPTables (if you so desired).
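If you do want to restrict access, a minimal IPTables sketch might look like this; it limits the VNC port range to a management subnet (the subnet shown is a placeholder, so adapt it to your environment):

iptables -A INPUT -p tcp --dport 5900:5999 -s 192.168.100.0/24 -j ACCEPT
iptables -A INPUT -p tcp --dport 5900:5999 -j DROP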

You can also password-protect the VNC console for the guest domain using this snippet of libvirt XML configuration:

<graphics type='vnc' port='-1' autoport='yes' listen='0.0.0.0' passwd='protectme'/>

Now, the user attempting to access the guest domain’s VNC console must know the password specified by the passwd parameter in the configuration. Otherwise, this configuration is the same as the previous configuration, and can be limited/protected in a variety of ways (limited to specific interfaces and/or controlled via IPTables).
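In case it’s helpful: the usual way to make any of these changes is with virsh edit, which validates the XML when you save it; the new graphics settings take effect the next time the guest domain starts. (The domain name below is a placeholder.)

virsh edit vm01
virsh shutdown vm01
virsh start vm01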

For more details, you can refer to the full reference on the libvirt guest domain XML configuration.

I’m back with more NVP goodness; this time, I’ll be walking you through the process of creating a logical network and attaching VMs to that logical network. This work builds on the stuff that has come before it in this series:

  • In part 1, I introduced you to the high-level architecture of NVP.
  • In part 2, I walked you through setting up a cluster of NVP controllers.
  • In part 3, I showed you how to install and configure NVP Manager.
  • In part 4, I discussed how to add hypervisors (KVM hosts, in this case) to your NVP environment.

Just a quick reminder in case you’ve forgotten: although VMware recently introduced VMware NSX at VMworld 2013, the architecture of NSX when used in a multi-hypervisor environment is very similar to what you can see today in NVP. (In pure vSphere environments, the NSX architecture is a bit different.) As a result, time spent with NVP now will pay off later when NSX becomes available. Might as well be a bit proactive, right?

At the end of part 4, I mentioned that I was going to revisit a few concepts before proceeding to the logical network piece, but after deeper consideration I’ve decided to proceed with creating a logical network. I still believe there will be a time when I need to stop and revisit some concepts, but it didn’t feel right just yet. Soon, I think.

Before I get into the details on how to create a logical network and attach VMs, I want to first talk about my assumptions regarding your environment.

Assumptions

This walk-through assumes that you have an NVP controller cluster up and running, an instance of NVP Manager connected to that cluster, at least 2 hypervisors installed and added to NVP, and at least 1 VM running on each hypervisor. I further assume that your environment is using KVM and libvirt.

Pursuant to these assumptions, my environment is running KVM on Ubuntu 12.04.2, with libvirt 1.0.2 installed from the Ubuntu Cloud Archive. I have the NVP controller cluster up and running, and an instance of NVP Manager connected to that cluster. I also have an NVP Gateway and an NVP Service Node, two additional components that I haven’t yet discussed. I’ll cover them in a near-future post.

Additionally, to make it easier for myself, I’ve created a libvirt network for the Open vSwitch (OVS) integration bridge, as outlined here (and an update here). This allows me to simply point virsh at the libvirt network, and the guest domain will attach itself to the integration bridge.
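In case you haven’t read those posts, here’s a minimal sketch of what such a libvirt network definition might look like, assuming the integration bridge is named br-int (the network name itself is arbitrary):

<network>
  <name>ovs-int</name>
  <forward mode='bridge'/>
  <bridge name='br-int'/>
  <virtualport type='openvswitch'/>
</network>

You’d load this with virsh net-define, then activate it with virsh net-start and virsh net-autostart; the guest domain’s interface definition then references the network by name.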

Revisiting Transport Zones

I showed you how to create a transport zone in part 4; it was necessary to have a transport zone present in order to add a hypervisor to NVP. But what is a transport zone? I didn’t explain it there, so let me do that now.

NVP uses the idea of transport zones to provide connectivity models based on the topology of the underlying network. For example, you might have hypervisors that connect to one network for management traffic, but use an entirely different network for VM traffic. The combination of a transport zone plus the transport connectors tells NVP how to form tunnels between hypervisors for the purposes of providing logical connectivity.

For example, consider this graphic:

The transport zones (TZ-01 and TZ-02) help NVP understand which interfaces on the hypervisors can communicate with which other interfaces on other hypervisors for the purposes of establishing overlay tunnels. These separate transport zones could be different trust zones, or just reflect the realities of connectivity via the underlying physical network.

Now that I’ve explained transport zones in a bit more detail, hopefully their role in adding hypervisors makes more sense. You’ll also need a transport zone already created in order to create a logical switch, which is what I’ll show you next.

Creating the Logical Switch

Before I get started taking you through this process, I’d like to point out that this process is going to seem laborious. When you’re operating outside of a CMP such as CloudStack or OpenStack, using NVP will require you to do things manually that you might not have expected. So, keep in mind that NVP was designed to be integrated into a CMP, and what you’re seeing here is what it looks like without a CMP. Cool?

The first step is creating the logical switch. To do that, you’ll log into NVP Manager, which will dump you (by default) into the Dashboard. From there, in the Summary of Logical Components section, you’ll click the Add button to add a switch. To create a logical switch, there are four sections in the NVP Manager UI where you’ll need to supply various pieces of information:

  1. First, you’ll need to provide a display name for the new logical switch. Optionally, you can also specify any tags you’d like to assign to the new logical switch.
  2. Next, you’ll need to decide whether to use port isolation (sort of like PVLANs; I’ll come back to these later) and how you want to handle packet replication for BUM (broadcast, unknown unicast, and multicast) traffic. For now, leave port isolation unchecked and (since I haven’t shown you how to set up a service node) leave packet replication set to Source Nodes.
  3. Third, you’ll need to select the transport zone to which this logical switch should be bound. As I described earlier, transport zones (along with connectors) help define connectivity between various NVP components.
  4. Finally, you’ll select the logical router, if any, to which this switch should be connected. We won’t be using a logical router here, so just leave that blank.

Once the logical switch is created, the next step is to add logical switch ports.

Adding Logical Switch Ports

Naturally, in order to connect to a logical switch, you need logical switch ports. You’ll add a logical switch port for each VM that needs to be connected to the logical switch.

To add a logical switch port, you’ll just click the Add button on the line for Switch Ports in the Summary of Logical Components section of the NVP Manager Dashboard. To create a logical switch port, you’ll need to provide the following information:

  1. You’ll need to select the logical switch to which this port will be added. The drop-down list will show all the logical switches; once one is selected, that switch’s UUID will automatically populate.
  2. The switch port needs a display name, and (optionally) one or more tags.
  3. In the Properties section, you can select a port number (leave blank for the next port), whether the port is administratively enabled, and whether or not there is a logical queue you’d like to assign (queues are used for QoS; leave it blank for no queue/no QoS).
  4. If you want to mirror traffic from one port to another, the Mirror Ports section is where you’ll configure that. Otherwise, just leave it all blank.
  5. The Attachment section is where you “plug” something into this logical switch port. I’ll come back to this—for now, just leave it blank.
  6. Under Port Security you can specify what address pairs are allowed to communicate with this port.
  7. Finally, under Security Profiles, you can attach an existing security profile to this logical port. Security profiles allow you to create ingress/egress access-control lists (ACLs) that are applied to logical switch ports.

In many cases, all you’ll need is the logical switch name, the display name for this logical switch port, and the attachment information. Speaking of attachment information, let’s take a closer look at attachments.

Editing Logical Switch Port Attachment

As I mentioned earlier, the attachment configuration is what “plugs” something into the logical switch. NVP logical switch ports support 6 different types of attachment:

  • None is exactly that—nothing. No attachment means an empty logical port.
  • VIF is used for connecting VMs to the logical switch.
  • Extended Network Bridge is a deprecated option for an older method of bridging logical and physical space. This has been replaced by L2 Gateway (below) and should not be used. (It will likely be removed in future NVP releases.)
  • Multi-Domain Interconnect (MDI) is used in specific configurations where you are federating multiple NVP domains.
  • L2 Gateway is used for connecting an L2 gateway service to the logical switch (this allows you to bring physical network space into logical network space). This is one I’ll discuss later when I talk about L2 gateways.
  • Patch is used to connect a logical switch to a logical router. I’ll discuss this in greater detail when I get around to talking about logical routing.

For now, I’m just going to focus on attaching VMs to the logical switch port, so you’ll only need to worry about the VIF attachment type. However, before we can attach a VM to the logical switch, you’ll first need a VM powered on and attached to the integration bridge. (Hint: If you’re using KVM, use virsh start <VM name> to start the VM. Or just read this.)

Once you have a VM powered on, you’ll need to be sure you know the specific OVS port on that hypervisor to which the VM is attached. To do that, you would use ovs-vsctl show to get a list of the VM ports (typically designated as “vnetX”), and then use ovs-vsctl list port vnetX to get specific details about that port. Here’s a trimmed, illustrative example of the output you might get from that command (the values below are made up, and depending on your OVS and libvirt versions the MAC may appear on the interface record instead, via ovs-vsctl list interface vnetX):
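_uuid               : 9c1b2a6e-0f34-4c5a-9d6e-1a2b3c4d5e6f
external_ids        : {attached-mac="52:54:00:aa:bb:cc"}
interfaces          : [b7e219cd-...]
name                : "vnet0"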

In particular, note the external_ids row, where it stores the MAC address of the attached VM. You can use this to ensure you know which VM is mapped to which OVS port.

Once you have the mapping information, you can go back to NVP Manager, select Network Components > Logical Switch Ports from the menu, and then highlight the empty logical switch port you’d like to edit. There is a gear icon at the far right of the row; click that and select Edit. Then click “4. Attachment” to edit the attachment type for that particular logical switch port. From there, it’s pretty straightforward:

  1. Select “VIF” from the Attachment Type drop-down.
  2. Select your specific hypervisor (must already be attached to NVP per part 4) from the Hypervisor drop-down.
  3. Select the OVS port (which you verified just a moment ago) using the VIF drop-down.

Click Save, and that’s it—your VM is now attached to an NVP logical network! A single VM attached to a logical network all by itself is no fun, so repeat this process (start up VM if not already running, verify OVS port, create logical switch port [if needed], edit attachment) to attach a few other VMs to the same logical network. Just for fun, be sure that at least one of the other VMs is on a different hypervisor—this will ensure that you have an overlay tunnel created between the hypervisors. That’s something I’ll be discussing in a near-future post (possibly part 6, maybe part 7).

Once your VMs are attached to the logical network, assign IP addresses to them (there’s no DHCP in your logical network, unless you installed a DHCP server on one of your VMs) and test connectivity. If everything went as expected, you should be able to ping VMs, SSH from one to another, etc., all within the confines of the new NVP logical network you just created.
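For example, from inside two of the attached VMs (the addresses are arbitrary placeholders on the same logical subnet):

# Inside the first VM:
sudo ip addr add 10.10.10.1/24 dev eth0

# Inside a second VM on a different hypervisor:
sudo ip addr add 10.10.10.2/24 dev eth0
ping 10.10.10.1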

There’s so much more to show you yet, but I’ll wrap this up here—this post is already way too long. Feel free to post any questions, corrections, or clarifications in the comments below. Courteous comments (with vendor disclosure, where applicable) are always welcome!

Welcome to Technology Short Take #35, another in my irregular series of posts that collect various articles, links and thoughts regarding data center technologies. I hope that something in here is useful to you.

Networking

  • Art Fewell takes a deeper look at the increasingly important role of the virtual switch.
  • A discussion of “statefulness” brought me again to Ivan’s post on the spectrum of firewall statefulness. It’s so easy sometimes just to revert to “it’s stateful” or “it’s not stateful,” but the reality is that it’s not quite so black-and-white.
  • Speaking of state, I like this piece by Ivan as well.
  • I tend not to link to TechTarget posts any more than I have to, because invariably the articles end up going behind a login requirement just to read them. Even so, this Q&A session with Martin Casado on managing physical and virtual worlds in parallel might be worth going through the hassle.
  • This looks interesting.
  • VMware introduced VMware NSX recently at VMworld 2013. Cisco shared some thoughts on what they termed a “software-only” approach; naturally, they have a different vision for data center networking (and that’s OK). I was a bit surprised by some of the responses to Cisco’s piece (see here and here). In the end, though, I like Greg Ferro’s statement: “It is perfectly reasonable that both companies will ‘win’.” There’s room for a myriad of views on how to solve today’s networking challenges, and each approach has its advantages and disadvantages.

Servers/Hardware

Nothing this time around, but I’ll watch for items to include in future editions. Feel free to send me links you think would be useful to include in the future!

Security

  • I found this write-up on using OVS port mirroring with Security Onion for intrusion detection and network security monitoring.

Cloud Computing/Cloud Management

Operating Systems/Applications

  • In past presentations I’ve referenced the terms “snowflake servers” and “phoenix servers,” which I borrowed from Martin Fowler. (I don’t know if Martin coined the terms or not, but you can get more information here and here.) Recently among some of Martin’s material I saw reference to yet another term: the immutable server. It’s an interesting construct: rather than managing the configuration of servers, you simply spin up new instances when you need a new configuration; existing configurations are never changed. More information on the use of the immutable server construct is also available here. I’d be interested to hear readers’ thoughts on this idea.

Storage

  • Chris Evans takes a look at ScaleIO, recently acquired by EMC, and speculates on where ScaleIO fits into the EMC family of products relative to the evolution of storage in the data center.
  • While I was at VMworld 2013, I had the opportunity to talk with SanDisk’s FlashSoft division about their flash caching product. It was quite an interesting discussion, so stay tuned for that update (it’s almost written; expect it in the next couple of days).

Virtualization

  • The rise of new converged (or, as some vendors like to call it, “hyperconverged”) architectures means that we have to consider the impact of these new architectures when designing vSphere environments that will leverage them. I found a few articles by fellow VCDX Josh Odgers that discuss the impact of Nutanix’s converged architecture on vSphere designs. If you’re considering the use of Nutanix, have a look at some of these articles (see here, here, and here).
  • Jonathan Medd shows how to clone a VM from a snapshot using PowerCLI. Also be sure to check out this post on the vSphere CloneVM API, which Jonathan references in his own article.
  • Andre Leibovici shares an unofficial way to disable the use of the SESparse disk format and revert to VMFS Sparse.
  • Forgot the root password to your ESXi 5.x host? Here’s a procedure for resetting the root password for ESXi 5.x that involves booting on a Linux CD. As is pointed out in the comments, it might actually be easier to rebuild the host.
  • vSphere 5.5 was all the rage at VMworld 2013, and there was a lot of coverage. One thing that I didn’t see much discussion around was what’s going on with the free version of ESXi. Vladan Seget gives a nice update on how free ESXi is changing with version 5.5.
  • I am loving the micro-infrastructure series by my VMware vSphere Design co-author, Forbes Guthrie. See it here, here, and here.

It’s time to wrap up now; I’ve already included more links than I normally include (although it doesn’t seem like it). In any case, I hope that something I’ve shared here is helpful, and feel free to share your own thoughts, ideas, and feedback in the comments below. Have a great day!

Following on the heels of this week’s VMware NSX announcement at VMworld 2013, I had someone contact me about a statement this person had heard. The statement claimed NSX was nothing more than a collection of tools, and that it was possible to get the equivalent of NSX using completely free (as in speech and as in beer) open source tools—specifically, IPTables, StrongSwan, OpenVPN, and Open vSwitch (OVS). Basically, the statement was that it was possible to create do-it-yourself network virtualization.

That got me thinking: is it true? After considering it for a while, I’d say it’s probably possible; I’m not enough of an expert with these specific tools to say it can’t be done. I will say that it would likely be difficult, beyond the reach of most organizations and individuals, and would still suffer from a number of operational drawbacks. Here’s why.

What are the core components of a network virtualization solution? In my view, there are at least three core components any network virtualization solution needs:

  1. Logically centralized knowledge of the network topology; this component should provide programmatic access to this information
  2. Programmable edge virtual switches in the hypervisor
  3. An encapsulation protocol to isolate logical networks from the physical network

Let’s compare this list with the DIY solution:

  1. I don’t see any component that is capable of building and/or maintaining knowledge of the network topology, and certainly no way to programmatically access this information. This has some pretty serious implications, which I’ll describe below.
  2. OVS fills the need for a programmable edge virtual switch quite nicely, considering that it was expressly designed for this purpose (and is itself leveraged by NSX).
  3. You could potentially leverage either StrongSwan or OpenVPN as an encapsulation protocol. Both of these solutions use encryption, so you’d have to accept the computational overhead of using encryption within your data center for hypervisor-to-hypervisor connectivity, but OK—I suppose these count. Neither of these solutions provides any sort of way to distinguish or mark traffic inside the tunnel, which also has some implications we need to explore.

OK, so the DIY solution is missing a couple of key components. What implications does this have?

  • Without any centralized knowledge of the network topology, there is nothing to handle programming OVS. Therefore, every single change must be performed manually. Provisioning a new VM? You must manually configure OVS, OpenVPN, StrongSwan, and possibly IPTables to handle connectivity to and from that VM. Moving a VM from one host to another? That requires manual reconfiguration. Live migration? That will require manual reconfiguration. Terminating a VM? That will require manual reconfiguration. (There’s a concrete sketch of this manual plumbing just after this list.)
  • Without programmatic access to the solution, it can’t be integrated into any other sort of management framework. Want to use it with OpenStack? CloudStack? It’s probably not going to work. Want to use it with a custom CMP you’ve built? It might work, but only after extensive integration work (and a lot of scripts).
  • It’s my understanding that both StrongSwan and OpenVPN will leverage standard IP routing technologies to direct traffic appropriately through the tunnels. What happens when you have multiple logical networks with overlapping IP address space? How will StrongSwan and/or OpenVPN respond? Because neither StrongSwan nor OpenVPN has any way of identifying or marking traffic inside the tunnel (think of VXLAN’s 24-bit VNID or STT’s 64-bit Context ID), how do we distinguish one tenant’s traffic from another tenant’s traffic? Can we even support multi-tenancy? Do we have to fall back to using VLANs?
  • Do you really want to incur the computational overhead of using encryption for all intra-DC traffic?
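To make the manual-plumbing point concrete, here’s a hedged sketch of the kind of per-host configuration you’d be scripting yourself for every topology change (bridge, port, and address names are placeholders; whether the tunnels are OVS GRE ports as shown here or StrongSwan/OpenVPN endpoints, the burden is the same):

ovs-vsctl add-br br-tun
ovs-vsctl add-port br-tun gre-host2 -- set interface gre-host2 type=gre options:remote_ip=192.0.2.12

Repeat that for every peer hypervisor, and revisit it whenever a VM is provisioned, moved, or terminated; nothing reprograms it for you.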

Of course, this list doesn’t even begin to address other operational concerns: multiple hypervisor support, support for multiple operating systems (or even multiple Linux distributions), support for physical workloads, physical top-of-rack (ToR) switch integration, high availability for various components, and the supportability of the overall solution.

As you can see, there are some pretty significant operational concerns there. Manual reconfiguration for just about any VM-related task? That doesn’t sound like a very good approach. Sure, it might be technically feasible to build your own network virtualization solution, but what benefits does it bring to the business?

Granted, I’m not an expert with some of the open source tools mentioned, so I could be wrong. If you are an expert with one of these tools, and I have misrepresented the functionality the tool is capable of providing, please speak up in the comments below. Likewise, if you feel I’m incorrect in any of my conclusions, I’d love to hear your feedback. Courteous comments are always welcome!

This is a liveblog of the day 2 keynote at VMworld 2013 in San Francisco. For a look at what happened in yesterday’s keynote, see here. Depending on network connectivity, I may or may not be able to update this post in real-time.

The keynote kicks off with Carl Eschenbach. Supposedly there are more than 22,000 people in attendance at VMworld 2013, making it—according to Carl—the largest IT infrastructure event. (I think some other vendors might take issue with that claim.) Carl recaps the events of yesterday’s keynote, revisiting the announcements around vSphere 5.5, VMware NSX, VMware VSAN, VMware Hybrid Cloud Service, and the expansion of the availability of Cloud Foundry. “This is the power of software”, according to Carl. Carl also revisits the three “imperatives” that Pat shared yesterday:

  1. Extending virtualization to all of IT.
  2. IT management giving way to automation.
  3. Making hybrid cloud ubiquitous.

Carl brings out Kit Colbert, a principal engineer at VMware (and someone who is relatively well-recognized within the virtualization community). They show a clip from a classic “I Love Lucy” episode that is intended to help illustrate the disconnect between the line of business and IT. After a bit of back and forth about the needs of the line of business versus the needs of IT, Kit moves into a demo of vCloud Automation Center (vCAC). The demo shows how to deploy applications to a variety of different infrastructures, including the ability to look at estimated costs across those infrastructures. The demo includes various database options as well as auto-scaling options.

So what does this functionality give application owners? Choice and visibility. What does it give IT? Governance (control), all made possible by automation.

The next view of the demo takes a step deeper, showing VMware Application Director deploying the sample application (called Project Vulcan in the demo). vCloud Application Director deploys complex application topologies in an automated fashion, and includes integration with tools like Puppet and Chef. Kit points out that what they’re showing isn’t just a vApp, but a “full blown” multi-tier application being deployed end-to-end.

The scripted “banter” between Carl and Kit leads to a review of some of the improvements that were included in the vSphere 5.5 release. Kit ties this back to the demo by calling out the improvements made in vSphere 5.5 with regard to latency-sensitive workloads.

Next they move into a discussion of the networking side of the house. (My personal favorite, but I could be biased.) Kit quickly reviews how NSX works and enables the creation of logical network services that are tied to the lifecycle of the application. Kit shows tasks in vCenter Server that reflect the automation being done by NSX with regard to automatically creating load balancers, firewall rules, logical switches, etc., and then reviews how we need to deploy logical network services in coordination with application lifecycle operations.

At Carl’s prompting, Kit goes yet another level deeper into how network virtualization works. He outlines how NSX eliminates the need to configure the physical network layer to provision new logical networks, discusses how NSX can provide logical routing, and outlines the benefits of distributed east-west routing (when routing occurs locally within the hypervisor). This, naturally, leads into a discussion of the distributed firewall functionality present in NSX, where firewall functionality occurs within the hypervisor, closest to the VMs. Following the list of features in NSX, Carl brings up load balancing, and Kit shows how load balancing works in NSX.

This leads into a customer testimonial video from WestJet, who discusses how they can leverage NSX’s distributed east-west firewalling to help better control and optimize traffic patterns in the data center. WestJet also emphasizes how they can leverage their existing networking investment while still deriving tremendous value from deploying NSX and network virtualization.

Next up in the demo is a migration from a “traditional” virtual network into an NSX logical network, and Kit shows how the migration is accomplished via a vMotion operation. This leads into a discussion of how VMware can not only do “V2V” migrations into NSX logical networks, but also “P2V” migrations using NSX’s logical-to-physical bridging functionality.

That concludes the networking section of the demo, and leads Carl and Kit into a storage-focused discussion centered around Carl’s mythical Project Vulcan. The discussion initially focuses on VMware VSAN, and how IT can leverage VSAN to help address application provisioning. The demo shows how VSAN can dynamically expand capacity by adding another ESXi host to the cluster; more hosts means more capacity for the VSAN datastore. Carl says that Kit has shown him simplicity and scalability, but not resiliency. This leads Kit to a slide that shows how VSAN ensures resiliency by maintaining multiple copies of data within a VSAN datastore. If some part of the local storage backing VSAN fails, VSAN will automatically copy the data elsewhere so that the policy governing how many copies of the data are maintained is enforced.

Following the VSAN demo, Carl and Kit move into a few end-user computing demonstrations, showing application access via Horizon Workspace. Kit wraps up his time on stage with a brief video—taken from “When Harry Met Sally,” if I’m not mistaken—that describes how demanding the line of business can be. The wrap-up to the demo felt quite natural and demonstrated some good chemistry between Kit and Carl.

Next up on the stage is Joe Baguley, CTO of EMEA, to discuss operations and operational concerns. Joe reviews why script- and rules-based management isn’t going to work in the new world, and why the world needs to move toward policy-based automation and management. This leads into a demo, and Joe shows—via vCAC—how vCenter Operations has initiated a performance remediation operation via the auto scale-out feature that was enabled when the application was provisioned. The demo next leads into a more detailed review of application performance via vCenter Operations.

Joe reviews three key parts of automated operations:

  1. (missed this one, sorry)
  2. Intelligent analytics
  3. Visibility into application performance

Next, Joe shows how vCenter Operations is integrating information from a variety of partners to help make intelligent recommendations, one of which is that Carl should change the storage tier based on the disk I/O requirements of his Project Vulcan application. vCAC will show the estimated cost of that change, and when the administrator approves that change, vSphere will leverage Storage vMotion to migrate to a new storage tier.

The discussion between Carl and Joe leads up to a demo of VMware Log Insight, where Joe shows events being pulled from a wide variety of sources to help drill down to the root cause of the storage issue in the demonstration. VMworld attendees (or possibly anyone, I guess) are encouraged to try out Log Insight by simply following @VMLogInsight on Twitter (they will give 5 free licenses to new followers).

Next up in the demo is a discussion of vCloud Hybrid Service, showing how the vSphere Web Client can be used to manage templates in vCHS. Joe brings the demo full-circle by taking us back to vCAC to deploy Project Vulcan into vCHS via vCAC. Carl reviews some of the benefits of vCHS, and asks Joe to share a few use cases. Joe shares that test/dev, new applications (perhaps built on Cloud Foundry?), and rapid capacity expansion are good use cases for vCHS.

Carl wraps up the day 2 keynote by summarizing the technologies that were displayed during today’s general session, and how all these technologies come together to help organizations deliver IT-as-a-service (ITaaS). Carl also makes commitments that VMware’s SDDC efforts will protect and leverage customers’ existing investments and help leverage existing skill sets. He closes the session with the phrase, “Champions drive change, so go drive change, and defy convention!”

And that concludes the day 2 keynote.

This is a liveblog of the day 1 keynote for VMworld 2013. Depending on network connectivity, this might not get published until after the keynote is finished. The keynote starts, as usual, with a snazzy marketing video.

After the video wraps up, Robin Matlock takes the stage. Robin invites some 10-year veterans of VMworld to share memorable moments from the previous 10 VMworlds.

From there, she launches into a discussion of the “customer virtualization journey”; it’s the same 3-step journey of IT production, business production, and IT-as-a-Service that we’ve heard about for several years. This leads to a talk about VMware’s 10 years of innovation, with products and features like ESX, vMotion, DRS, Storage vMotion, and more. Robin lays the foundation for discussing more than just compute virtualization, alluding to innovation in areas other than compute. She then introduces Pat Gelsinger, VMware CEO.

Pat takes the stage and starts talking about the different “eras” of computing: mainframe, client/server, and now the mobile/cloud era. As we are moving into the mobile/cloud era, Pat thinks that four trends are shaping this era: social, mobile, cloud, and big data. Pat believes that it’s all about the applications, and that enterprise applications are becoming more like consumer applications. He gives an example of such an app—an app that leverages cloud-based resources to deliver DNA visualizations to mobile devices.

After a brief foray through the potential “role models” for IT professionals, Pat takes us into a discussion of three imperatives: virtualization beyond just compute, automation taking over IT management, and ubiquitous hybrid cloud. He starts discussing these imperatives, and focuses on the expansion of virtualization beyond just compute. This is a software-defined data center discussion—compute virtualization, storage virtualization, network virtualization, and management.

Next up is a customer testimonial from Columbia Sportswear, who talks about how they’ve embraced the idea of the software-defined data center.

Following the video, Pat launches into a more in-depth discussion of the four pillars of the software-defined data center. Specifically, he starts with a discussion of compute virtualization; in particular, he announces the general availability of vSphere 5.5 and vCloud Suite 5.5. He runs through a brief list of new features: 2x the number of cores, 2x the number of vCPUs, 2x the performance, 32x the size of virtual disks, etc. He pays particular attention to mission-critical apps on VMware, calling out Oracle, SAP, and others. Pat also calls out App-Aware HA and Big Data Extensions as new features that help reinforce the message “Apps love vSphere.”

Of course, it’s not just about today’s apps, but also next-generation apps. Pat announces the availability of Cloud Foundry on vSphere and on vCloud Hybrid Service (vCHS). Cloud Foundry is a well-regarded Platform-as-a-Service (PaaS). This gives rise to the message “Even more apps love vSphere.”

This leads Pat into a discussion of VMware’s storage-focused features. He says that storage is important, but complex, and must address not only the requirements of today’s apps but also the requirements of the next generation of applications. He discusses Storage vMotion, Storage DRS, VASA, and VAAI as milestones along the way to software-defined storage. But what is software-defined storage?

  • First, the control plane must be policy-driven.
  • Second, the data plane must be virtualized.
  • Third, application-specific data services must be virtualized and abstracted from the hardware arrays.

These three components make up software-defined storage. Pat announces VSAN (Virtual SAN), a technology that extends the hypervisor to enable software-defined storage. VSAN is in general public beta, and GA is expected in conjunction with the first update to vSphere 5.5 in the first half of 2014. Pat also makes mention of vVols, vSphere Flash Read Cache, and Virsto. However, he still doesn’t provide a clear path on how these different technologies will meld together (if they will indeed meld together) in future releases.

Pat reinforces the close partnership that exists between VMware and storage industry partners as the key parts of software-defined storage.

Next, Pat moves into a discussion of extending virtualization to networking. Network virtualization and SDN are the next step to the software-defined data center. He announces VMware NSX, which is VMware’s key network virtualization platform. It will support any application, any hypervisor, any cloud management platform, and any physical network hardware. It is the culmination of the merger between VMware’s own vCNS and Nicira’s NVP technology.

Martin Casado now takes the stage. Martin provides a brief background of what led to his work in networking, and then dives into a more in-depth discussion of network virtualization. He provides an overview of network virtualization, emphasizing the importance of how NSX changes the operational model of the data center. After a brief discussion of the important role of the virtual switch—there are now more virtual switch ports than physical switch ports in data centers—Martin invites three network virtualization customers to join him and Pat on the stage: eBay, Citi, and GE.

eBay discusses how they already have about 3,000 VMs running on VMware’s network virtualization software, and no change to the physical infrastructure was required to implement it. Next, Citi—their tagline is “Citi never sleeps”—talks about how they’ve virtualized over 50% of their server workloads and are now moving into network virtualization. Citi plans to use network virtualization to provide a higher level of security and isolation on a multi-tenant private cloud. Finally, GE talks about how network virtualization has been a key part of helping them apply the same technologies and automation techniques to networking as they’ve done for compute workloads.

Following the customer discussion, Pat announces partner integrations with VMware NSX. He doesn’t go into any specific details, but shows a slide with a whole bunch of logos, and encourages attendees to see the partner integrations in the Solutions Exchange.

Pat now shifts his focus to management and automation. As he discusses cloud management, Pat takes a moment to discuss VMware and OpenStack, and discusses how VMware is “the best choice for OpenStack.” That leads to a discussion of hybrid cloud, which in turn leads to a discussion of VMware’s vCloud Hybrid Service (vCHS). In particular, Pat calls out common management tools, common networking, common security, and common support. All of this is possible without compromising SLAs, regulatory requirements, or anything else. Pat announces the general availability of vCHS, available to all customers and partners. He refers to it as the “first” true hybrid cloud.

Bill Fathers now takes the stage. Bill takes a moment to introduce himself; he comes from Savvis and has 10 years of experience delivering cloud services. Pat asks him: why join VMware? Bill believes VMware to be “uniquely positioned” to be a leader in delivering hybrid cloud services. Early access to vCHS was announced in June, and according to Bill the team gained very valuable insight as a result of the early access program. Bill starts discussing some customers of vCHS:

  • Harley Davidson Dealer Services (HDDS), which used vCHS to deploy a new mobile-friendly app
  • The Apollo Group discusses how they expand some of their workloads into vCHS to save money on capital expenditures without having to change the tools they use to manage it

Some of the next new services coming to vCHS include:

  • DRaaS (Disaster Recovery as a Service)
  • Cloud Foundry (as mentioned by Pat earlier)
  • Desktop as a Service (DaaS), based on VMware’s Horizon products

Where are the vCHS data centers located? Two new DCs will launch in September: Silicon Valley and Virginia. In October, a new DC will open in Texas. In addition, Savvis will start using vCHS as a delivery method for their own customers. This is a pretty big partnership, in my view.

Pat wraps up with a review of the key components of the software-defined data center, and reinforces the role of the “VMware faithful,” the Champions of the Mobile-Cloud Era. And with that, Pat wraps up the keynote.

This is part 4 of the Learning NVP blog series. Just to quickly recap what’s happened so far, in part 1 I provided the high-level architecture of NVP and discussed the role of the components in broad terms. In part 2, I focused on the NVP controllers, showing you how to build/configure the NVP controllers and establish a controller cluster. Part 3 focused on NVP Manager, which allowed us to perform a few additional NVP configuration tasks. In this post, I’ll show you how to add hypervisors to NVP so that you can turn up your first logical network.

Assumptions

In this post, I’m using Ubuntu 12.04.2 LTS with the KVM hypervisor. I’m assuming that you’ve already gone through the process of getting KVM installed on your Linux host; if you need help with that, a quick Google search should turn up plenty of “how to” articles (it’s basically a sudo apt-get install kvm operation). If you are using a different Linux distribution or a different hypervisor, the commands you’ll use as well as the names of the prerequisite packages you’ll need to install will vary slightly. Please keep that in mind.

Installing Prerequisites

To get a version of libvirt that supports Open vSwitch (OVS), you’ll want to enable the Ubuntu Cloud Archive. The Ubuntu Cloud Archive is technically intended to allow users of Ubuntu LTS releases to install newer versions of OpenStack and its dependent packages (including libvirt). Instructions for enabling and using the Ubuntu Cloud Archive are found here. However, I’ve found using the Ubuntu Cloud Archive to be an easy way to get a newer version of libvirt (version 1.0.2) on Ubuntu 12.04 LTS.
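If you just want the short version, enabling the Cloud Archive boils down to a few commands. Here’s a sketch assuming you want the Grizzly packages on 12.04 (“Precise”); substitute the appropriate OpenStack release name if you need a different one:

sudo apt-get install ubuntu-cloud-keyring

echo "deb http://ubuntu-cloud.archive.canonical.com/ubuntu precise-updates/grizzly main" | sudo tee /etc/apt/sources.list.d/cloud-archive.list

sudo apt-get update

Once apt-get update completes, subsequent package installations will pull newer versions from the Cloud Archive where they exist.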

Once you get the Ubuntu Cloud Archive working, go ahead and install libvirt:

sudo apt-get install libvirt-bin

Next, go ahead and install some prerequisite packages you’ll need to get OVS installed and working:

sudo apt-get install dkms make libc6-dev

Now you’re ready to install OVS.

Installing OVS

Once your hypervisor node has the appropriate prerequisites installed, you’ll need to install an NVP-specific build of OVS. This build is identical to the open source build in every way except that it adds the ability to create STT tunnels and some extra NVP-specific utilities for integrating OVS into NVP. For Ubuntu, this NVP-specific version of OVS is distributed as a compressed tar archive. First, extract the files like this:

tar -xvzf <file name>

This will extract a set of Debian packages. For ease of use, I recommend moving these files into a separate directory. Once the files are in their own directory, you would install them like this:

cd <directory where the files are stored>
sudo dpkg -i *.deb

Note that if you don’t install the prerequisites listed above, the installation of the DKMS package (for the OVS kernel datapath) will fail. Trying to then run sudo apt-get install <package list> at that point will also fail; you’ll need to run sudo apt-get -f install instead. This will “fix” the broken packages and allow the DKMS installation to proceed (which it will do automatically).

These particular OVS installation packages do a couple of different things:

  • They install OVS (of course), including the kernel module.
  • They automatically create and configure something called the integration bridge, which I’ll describe in more detail in a few moments.
  • They automatically generate some self-signed certificates you’ll need later.

Now that you have OVS installed, you’re ready for the final step: to add the hypervisor to NVP.

Adding the Hypervisor to NVP

For this process, you’ll need access to NVP Manager as well as SSH access to the hypervisor. I strongly recommend using the same system for access to both NVP Manager and the hypervisor via SSH, as this will make things easier.

Adding the hypervisor to NVP is a two-step process. First, you’ll configure the hypervisor; second, you’ll configure NVP. Let’s start with the hypervisor.

Verifying the Hypervisor Configuration

Before you can add the hypervisor to NVP, you’ll need to be sure it’s configured correctly, so I’ll walk you through a couple of verification steps. NVP expects there to be an integration bridge: an OVS bridge that NVP will control. The integration bridge is separate from any other bridges configured on the system, so any bridge you’re using (or will use) outside of NVP remains untouched. I’ll probably expand upon this in more detail in a future post, but for now let’s just verify that the integration bridge exists and is configured properly.

To verify the presence of the integration bridge, use the command ovs-vsctl show (you might need to use sudo to get the appropriate permissions). The output should look something like this (a representative example from a host with only the integration bridge defined; your UUIDs and OVS version will differ):
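a4e9b054-1d07-4b43-9db9-12e6fdf904bc
    Bridge br-int
        Port br-int
            Interface br-int
                type: internal
    ovs_version: "1.10.2"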

Note the bridge named “br-int”—this is the default name for the integration bridge. As you can see from this output, the integration bridge does indeed exist. However, you must also verify that the integration bridge is configured appropriately. For that, you’ll use the ovs-vsctl list bridge br-int command, which produces output that looks something like this (abbreviated here, with illustrative values; yours will differ):
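_uuid               : 31fd4e48-7e7b-4e93-9d9a-0d473e28ea2b
controller          : []
datapath_id         : "0000f2bd48e47b31"
external_ids        : {bridge-id="br-int"}
fail_mode           : []
mirrors             : []
name                : "br-int"
netflow             : []
ports               : [bdd0e3f4-0a54-4e0e-8b0c-62d025f0e97a]
sflow               : []
status              : {}
stp_enable          : false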

Now there’s a lot of goodness here (“Hey, look—NetFlow support! Mirrors! sFlow! STP support even!”), but try to stay focused. I want to draw your attention to the external_ids line, where the value “bridge-id” has been set to “br-int”. This is exactly what you want to see, and I’ll explain why in just a moment.

One final verification step is needed. NVP uses self-signed certificates to authenticate hypervisors, so you’ll need to be sure that the certificates have been generated (they should have been generated during the installation of OVS). You can verify by running ls -la /etc/openvswitch and looking for the file “ovsclient-cert.pem”. If it’s there, you should be good to go.
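By the way, if you’d rather script that check, a quick one-liner (purely illustrative) does the trick:

test -f /etc/openvswitch/ovsclient-cert.pem && echo "client certificate found"

If the file isn’t there, double-check that the NVP-specific OVS packages installed cleanly.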

Next, you’ll need to do a couple of things in NVP Manager.

Create a Transport Zone

Before you can actually add the hypervisor to NVP, you first need to ensure that you have a transport zone defined. I’ll assume that you don’t and walk you through creating one.

  1. In NVP Manager (log in if you aren’t already logged in), select Network Components > Transport Zones.
  2. In the Network Components Query Results, you’ll probably see no transport zones listed. Click Add.
  3. In the Create Transport Zone dialog, specify a name for the transport zone, and optionally add tags (tags are used for searching and sorting information in NVP Manager).
  4. Click Save.

That’s it, but it’s an important bit. I’ll explain more in a moment.

Add the Hypervisor

Now that you’ve verified the hypervisor configuration and created a transport zone, you’re ready to add the hypervisor to NVP. Here’s how.

  1. Log into NVP Manager, if you aren’t already, and click on Dashboard in the upper left corner.
  2. In the Summary of Transport Components section, click the Add button on the line for Hypervisors.
  3. Ensure that Hypervisor is listed in the Transport Node Type drop-down list, then click Next.
  4. Specify a display name (I use the hostname of the hypervisor itself), and—optionally—add one or more tags. (Tags are used for searching/sorting data in NVP Manager.) Click Next.
  5. In the Integration Bridge ID text box, type “br-int”. (Do you know why? It’s not because that’s the name of the integration bridge. It’s because that’s the value set in the external_ids section of the integration bridge.) Be sure that Admin Status Enabled is checked, and optionally you can check Tunnel Keep-Alive Spray. Click Next to continue.
  6. Next, you need to authenticate the hypervisor to NVP. This is a multi-step process. First, switch over to the SSH session with the hypervisor (you still have it open, right?) and run cat /etc/openvswitch/ovsclient-cert.pem. This will output the contents of the OVS client certificate to the screen. Copy everything between the BEGIN CERTIFICATE and the END CERTIFICATE lines, including those lines.
  7. Flip back over to NVP Manager. Ensure that Security Certificate is listed, then paste the clipboard contents into the Security Certificate box. The red X on the left of the dialog box should change to a green check mark, and you can click Next to continue.
  8. The final step is creating a transport connector. Click Add Connector.
  9. In the Create Transport Connector dialog, select STT as the Transport Type.
  10. Select the transport zone you created earlier, if it isn’t already populated in the Transport Zone UUID drop-down.
  11. Specify the IP address of the interface on the hypervisor that will be used as the source for all tunnel traffic. This is generally not the management interface. Click OK.
  12. Click Save.

This should return you to the NVP Manager Dashboard, where you’ll see the Hypervisors line in the Summary of Transport Components section go from 0 to 1 (in both the Registered and the Active columns). You can refresh just that section of the display by clicking the little circular arrow button. You should also see an entry appear in the Hypervisor Software Version Summary section. This screen shot shows what the dashboard would look like after adding 3 hypervisors:

NVP Manager dashboard

(Side note: This dashboard shows a gateway and service node also added, though I haven’t discussed those yet. It also shows a logical switch and logical switch ports, though I haven’t discussed those either. Be patient—they’re coming. I can only write so fast.)
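If you’d like to verify things from the hypervisor side as well, re-run sudo ovs-vsctl show on the hypervisor. Once the hypervisor has registered with NVP, you should see manager and controller connections appear. The output below is only an illustration: the IP address is made up, and the ports shown (6632 for the management connection, 6633 for OpenFlow) may vary depending on your NVP configuration:

a4e9b054-1d07-4b43-9db9-12e6fdf904bc
    Manager "ssl:192.168.100.11:6632"
        is_connected: true
    Bridge br-int
        Controller "ssl:192.168.100.11:6633"
            is_connected: true
        Port br-int
            Interface br-int
                type: internal
    ovs_version: "1.10.2"

The is_connected: true lines are the key: they tell you the hypervisor is successfully talking to the NVP controller cluster.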

Congratulations—you’ve added your first hypervisor to NVP! In the next part, I’m going to take a moment to remind readers of a few concepts that I’ve covered in the past, and show how those concepts relate to NVP. From there, we’ll pick back up with adding our first logical network and establishing connectivity between VMs on different hypervisors.

Until that time, feel free to post any questions, thoughts, corrections, or clarifications in the comments below. Please disclose vendor affiliations, where appropriate, and—as always—I welcome all courteous comments.

