
Welcome to Technology Short Take #40. The content is a bit light this time around; I thought I’d give you, my readers, a little break. Hopefully there’s still some useful and interesting stuff here. Enjoy!

Networking

  • Bob McCouch has a nice write-up on options for VPNs to AWS. If you need to build out such a solution, you might want to read his post for some additional perspectives.
  • Matthew Brender touches on a networking issue present in VMware ESXi with regard to VMkernel multi-homing. This is something others have touched on before (including myself, back in 2008—not 2006 as I tweeted one day), but Matt’s write-up is concise and to the point. You’ll definitely want to keep this consideration in mind for your designs. Another thing to consider: vSphere 5.5 introduces the idea of multiple TCP/IP stacks, each with its own routing table. As the ability to use multiple TCP/IP stacks extends throughout vSphere, it’s entirely possible this limitation will eventually go away. (There’s a quick CLI sketch for inspecting these stacks after this list.)
  • YAOFC (Yet Another OpenFlow Controller), interesting only because it focuses on issues of scale (tens of thousands of switches with hundreds of thousands of endpoints). See here for details.
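
Speaking of multiple TCP/IP stacks: if memory serves, ESXi 5.5 lets you inspect them via esxcli. The following commands are a sketch (verify against your particular build) that should list the configured stacks and show which VMkernel interfaces are bound to each:

esxcli network ip netstack list       # list the configured TCP/IP stacks
esxcli network ip interface list      # show VMkernel interfaces and their stacks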

Servers/Hardware

  • Intel recently announced a refresh of the E5 CPU line; Kevin Houston has more details here.

Security

  • This one slipped past me in the last Technology Short Take, so I wanted to be sure to include it here. Mike Foley—whom I’m sure many of you know—recently published an ESXi security whitepaper. His blog post provides more details, as well as a link to download the whitepaper.
  • The OpenSSL “Heartbleed” vulnerability has captured a great deal of attention (justifiably so). Here’s a quick article on how to assess whether your Linux-based server is affected; a rough first-pass check is sketched just below.
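
For a fast first pass before digging into that article, checking the OpenSSL version is a reasonable start; releases 1.0.1 through 1.0.1f are affected, and 1.0.1g contains the fix. A rough sketch:

openssl version                        # affected: 1.0.1 through 1.0.1f
dpkg -s libssl1.0.0 | grep ^Version    # package-level check on Debian/Ubuntu

Keep in mind that distributions frequently backport security fixes without bumping the version string, so treat this as a starting point rather than a conclusive answer.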

Cloud Computing/Cloud Management

  • I recently built a Windows Server 2008 R2 image for use in my OpenStack home lab. This isn’t as straightforward as building a Linux image (no surprises there), but I did find a few good articles that helped along the way. If you find yourself needing to build a Windows image for OpenStack, check out creating a Windows image on OpenStack (via Gridcentric) and building a Windows image for OpenStack (via Brent Salisbury). You might also check out Cloudbase.it, which offers a version of cloud-init for Windows as well as some prebuilt evaluation images. (Note: I was unable to get the prebuilt images to download, but YMMV.) See the Glance upload sketch after this list.
  • Speaking of building OpenStack images, here’s a “how to” guide on building a Debian 7 cloud image for OpenStack.
  • Sean Roberts recently launched a series of blog posts about various OpenStack projects that he feels are important. The first project he highlights is Congress, a policy management project that has recently gotten a fair bit of attention (see a reference to Congress at the end of this recent article on the mixed messages from Cisco on OpFlex). In my opinion, Congress is a big deal, and I’m really looking forward to seeing how it evolves.
  • I have a related item below under Virtualization, but I wanted to point this out here: work is being done on a VIF driver to connect Docker containers to Open vSwitch (and thus to OpenStack Neutron). Very cool. See here for details.
  • I love that Cody Bunch thinks a lot like I do, as in this quote from a recent post sharing some links on OpenStack Heat: “That generally means I’ve got way too many browser tabs open at the moment and need to shut some down. Thus, here comes a huge list of OpenStack links and resources.” Classic! Anyway, check out the list of Heat resources; you’re bound to find something useful there.
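
One follow-up on the Windows image item above: once the image is built, getting it into your OpenStack environment is just a Glance upload. Here’s a minimal sketch using the Glance CLI of this era; the image name and file name are placeholders, and you’d adjust the disk format to match however you built the image:

glance image-create --name "win2008r2" --disk-format qcow2 --container-format bare --file win2008r2.qcow2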

Operating Systems/Applications

  • A short while back I had a Twitter conversation about spinning up a Minecraft server for my kids in my OpenStack home lab. That led to a few other discussions, one of which was how cool it would be if you could use Heat autoscaling to scale Minecraft. Then someone sent me this.
  • Per the Microsoft Windows Server Team’s blog post, the Windows Server 2012 R2 Update is now generally available (there’s also a corresponding update for Windows 8.1).

Storage

  • Did you see that EMC released a virtual edition of VPLEX? It’s being called the “data plane” for software-defined storage. VPLEX is an interesting product, no doubt, and the introduction of a virtual edition is intriguing (but not entirely unexpected). I did find it unusual that the release of the virtual edition signaled the addition of a new feature called “MetroPoint,” which allows two sites to replicate back to a single site. See Chad Sakac’s blog post for more details.
  • This discussion on MPIO and in-guest iSCSI is a great reminder that designing solutions in a virtualized data center (or, dare I say it—a software-defined data center?) isn’t the same as designing solutions in a non-virtualized environment.

Virtualization

  • Ben Armstrong talks briefly about Hyper-V protected networks, a way to protect a VM against a network outage by migrating the VM to a different host if a link failure occurs. This is kind of handy, but it requires Windows Server clustering in order to function (since live migration in Hyper-V requires Windows Server clustering). A question for readers: is Windows Server clustering still much the same as it was in years past? It was a great solution in its day, but now it seems outdated.
  • At the same time, though, Microsoft is making some useful networking features easily accessible in Hyper-V. Two more of Ben’s articles show off the DHCP Guard and Router Guard features available in Hyper-V on Windows Server 2012.
  • There have been a fair number of posts talking about nested ESXi (ESXi running as a VM on another hypervisor), either on top of ESXi or on top of VMware Fusion/VMware Workstation. What I hadn’t seen—until now—was how to get that working with OpenStack. Here’s how Mathias Ewald made it work.
  • And while we’re talking nested hypervisors, be sure to check out William Lam’s post on running a nested Xen hypervisor with VMware Tools on ESXi.
  • Check out this potential way to connect Docker containers with Open vSwitch (which then in turn opens up all kinds of other possibilities).
  • Jason Boche regales us with a tale of a vCenter 5.5 Update 1 upgrade that results in missing storage providers. Along the way, he also shares some useful information about Profile-Driven Storage in general.
  • Eric Gray shares information on how to prepare an ESXi ISO for PXE booting.
  • PowerCLI 5.5 R2 has some nice new features. Skip over to Alan Renouf’s blog to read up on what is included in this latest release.

I should close things out now, but I do have one final link to share. I really enjoyed Nick Marshall’s recent post about the power of a tweet. In the post, Nick shares how three tweets—one with Duncan Epping, one with Cody Bunch, and one with me—have dramatically altered his life and his career. It’s pretty cool, if you think about it.

Anyway, enough is enough. I hope that you found something useful here. I encourage readers to contribute to the discussion in the comments below. All courteous comments are welcome.


Welcome to part 11 of the Learning NSX blog series, in which I provide a high-level overview of the basics on integrating VMware NSX into an OpenStack deployment using OpenStack Neutron. In case you’re just now catching up on this blog series, I encourage you to visit my Learning NVP/NSX page, which has a brief summary of all the posts in the series.

Now that I’ve shown you how to build all the different components of NSX (NSX controllers, NSX Manager, gateway appliances, and service nodes), it’s time to add integration with a cloud management platform. VMware NSX was designed to be integrated into a cloud management platform, and because it offers a full-featured RESTful API, you could—in theory—integrate NSX into just about any such platform. For the purposes of this series, however, I’ll limit the discussion to how one would integrate VMware NSX with OpenStack via the OpenStack Neutron (formerly “Quantum”) project, which provides virtual networking functionality for OpenStack-based clouds.

The challenge in discussing OpenStack-NSX integration is that one must first understand the basics of OpenStack Neutron before looking at how to integrate VMware NSX into Neutron. Therefore, the primary goal of this post in the series is to provide an overview of Neutron, and then discuss how NSX integrates into Neutron. The next post in the series will provide more in-depth technical details on exactly how the integration is configured.

Let’s start with a generic overview of OpenStack Neutron and its components.

Examining OpenStack Neutron Components

OpenStack Neutron itself has a number of different components:

  • The Neutron server, which supplies the Neutron API and is typically—but not required to be—deployed co-resident on an OpenStack “controller” node (not to be confused with an NSX controller).
  • The Neutron DHCP agent, which provides DHCP services for the various logical networks created in Neutron (at least, whenever DHCP is enabled for a logical network).
  • The Neutron L3 agent, which provides L3 (routed) connectivity for Neutron logical networks. This includes both L3 between logical networks as well as L3 in and out of logical networks.
  • The Neutron metadata agent, which is responsible for providing connectivity between instances and the Nova metadata service (for customizing instances appropriately).
  • Finally, for many Open vSwitch (OVS)-based installations, there is the Neutron OVS agent, which provides a way to program OVS to do what Neutron needs it to do.

The Neutron DHCP, L3, and metadata agents are typically—but not required to be—installed on a so-called “network” node. This network node thus provides DHCP services to the various logical networks (often using Linux network namespaces, if the Linux distribution supports them), routed connectivity in and out of logical networks (once again with network namespaces, Linux bridges, and iptables rules), and metadata service connectivity.

From a traffic flow perspective, traffic from one subnet in a logical network to another subnet in a logical network must hairpin through the network node (where the L3 agent resides). Traffic headed out of a logical network must flow through the network node, where iptables rules will perform the necessary network address translation (NAT) functions associated with the use of floating IPs in an OpenStack environment. And, as I’ve already mentioned, the network node provides DHCP services to all the logical networks as well.
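
To make that division of labor a bit more concrete, here’s roughly what a tenant-facing workflow looks like with the Neutron CLI (a sketch with made-up names). The Neutron server services these API calls, while the agents described above handle the backend plumbing for DHCP, routing, and NAT:

neutron net-create logical-net-01
neutron subnet-create --name logical-subnet-01 logical-net-01 10.0.10.0/24
neutron router-create logical-router-01
neutron router-interface-add logical-router-01 logical-subnet-01
neutron router-gateway-set logical-router-01 ext-net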

This is, by necessity, a very high-level overview of OpenStack Neutron and the core components. Let’s now take a look at how these components are affected when you choose to use VMware NSX with OpenStack Neutron.

Reviewing OpenStack Neutron With NSX

When you’re using VMware NSX as the mechanism behind OpenStack Neutron, some of Neutron’s functionality is provided by NSX itself:

  • You no longer need the L3 agent. L3 connectivity is provided by the NSX gateway appliances and logical gateway services (refer to part 6 and part 9 of the series, respectively).
  • You won’t need the OVS agent on the hypervisors, because the NSX controllers are responsible for configuring/programming OVS on transport nodes. More details on this interaction are provided in part 4 of the series. (“Transport node” is a generic term referring to a node that participates in the data plane, such as a hypervisor, gateway, or service node.)
  • You will still need the Neutron server, the DHCP agent, and the metadata agent.

Therefore, if you choose to deploy the OpenStack Neutron components in a “typical” fashion, you’d have a setup something like this:

  • An OpenStack “controller” node would host the Neutron server, which provides the API with which other parts of OpenStack will interact. This node generally does not have OVS installed, but would have the NSX plugin for Neutron installed. This plugin implements the integration between OpenStack and NSX, and will communicate with NSX via the NSX northbound RESTful API. (A rough sketch of what that API interaction looks like follows this list.)
  • An OpenStack “network” node would host the Neutron DHCP agent and the Neutron metadata agent. This node would have OVS installed, and would be registered into NSX as a hypervisor (even though it is not a hypervisor and will not host any VMs).
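
As a rough illustration of that northbound RESTful interaction (this is just a sketch of the API’s general style, not the exact calls the plugin makes), resources in the NVP/NSX API live under a /ws.v1/ namespace. Assuming a controller reachable at 192.168.1.10 (a hypothetical address), a manual session might look something like this:

# authenticate; NVP/NSX API sessions are cookie-based (controller IP is hypothetical)
curl -k -c cookies.txt -d "username=admin&password=admin" https://192.168.1.10/ws.v1/login
# list logical switches (Neutron networks map to these)
curl -k -b cookies.txt https://192.168.1.10/ws.v1/lswitch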

At this point, you should have a pretty good understanding of how, at a high level, NSX integrates with and affects OpenStack Neutron. In the next post in the series, I’ll provide more details on exactly how to configure the integration between VMware NSX and OpenStack Neutron.


Welcome to part 10 of the Learning NSX blog series, in which I will walk through adding an NSX service node to your NSX configuration.

In the event you’ve joined this series mid-way, here’s what I’ve covered thus far:

In this installment of the series, I’ll walk you through setting up an NSX service node and adding it to the NSX domain. Before I do that, though, it’s probably useful to set some context around the role a service node plays in an NSX environment.

Reviewing Service Nodes in VMware NSX

VMware NSX offers two different ways of handling BUM (Broadcast, Unknown unicast, and Multicast) traffic:

  • NSX can perform source replication, which means that each hypervisor is responsible for replicating BUM packets and transmitting them onto the logical network(s). In small environments, this is probably fine.
  • NSX can also perform service node replication, which—as you probably guessed—uses dedicated service node appliances to offload BUM packet replication and transmission. (Service nodes also play a role in multi-DC deployments with remote gateways, but that’s a topic for a different day.)

My environment is pretty small and limited on resources, so I don’t really need a service node. However, the current implementation of the integration between OpenStack Neutron and NSX assumes the presence of a service node. There is a workaround (I’ll probably blog about that later), but I figured I would just go ahead and add a service node to make things easier.

Building an NSX Service Node

Like the NSX controllers, the NSX gateways, and NSX Manager, the NSX service node software is distributed as an ISO. To install a service node on a physical server, you’d just burn the ISO to an optical disk and boot the server from it. From the boot menu, select the automated installation option, and in a few minutes you’re done.

While it is possible to run a service node as a VM (that’s what I’m doing), be aware this isn’t a supported configuration. In addition, if you think about it, it’s kind of crazy—you’re building a VM that runs on a hypervisor to offload packet replication from the hypervisor. Doesn’t really make sense, does it?

Once the service node has finished installing, you’re ready to configure it and then add it to NSX.

Configuring an NSX Service Node

Like the controllers, the gateway, and NSX Manager, the configuration of an NSX service node is pretty straightforward:

  1. Set a password for the admin user (optional, but highly recommended).

  2. Set the hostname for the service node (also optional, but recommended as well).

  3. Assign IP addresses to the service node.

  4. Configure DNS and NTP settings.

Let’s take a look at each of these steps.

To set the password for the default admin user, just use this command:

set user admin password

You’ll be prompted to supply the new password, then retype it for confirmation. Easy, right? (And pretty familiar if you’ve used Linux before.)

Setting the hostname for the service node is equally straightforward:

set hostname <hostname>

Now you’re ready to assign IP addresses to the service node. Note that I said “IP addresses” (plural). This is because the service node needs to have connectivity on the management network (so that it can communicate with the NSX controller cluster) as well as the transport network (so that it can set up tunnels with other transport nodes, like hypervisors and gateways). Use this command to see the network interfaces that are present in the service node:

show network interfaces

You’ll note that for each physical interface in the system, the NSX service node installation procedure created a corresponding bridge (this is actually an OVS bridge). So, for a server that has two interfaces (eth0 and eth1), the installation process will automatically create breth0 and breth1. Generally, you’ll want to assign your IP addresses to the bridge interfaces, and not to the physical interfaces.

Let’s say that you wanted to assign an IP address to breth0, which corresponds to the physical eth0 interface. You’d use this command:

set network interface breth0 ip config static 192.168.1.5 255.255.255.0

Naturally, you’d want to substitute the correct IP address and subnet mask in that command. Once the interface is configured, you can use the standard ping command to test connectivity (note, though, that you can’t use any command-line switches with ping, as they aren’t supported by the streamlined NSX appliance CLI). For a service node, you’ll want to assign breth0 an IP address on the management network, and assign breth1 an IP address on your transport network.

Note that you may also need to add a default route using this command:

add network route 0.0.0.0 0.0.0.0 <Default gateway IP address>

Assuming connectivity is good, you’re ready to add DNS and NTP servers to your configuration. Use these commands:

add network dns-server <DNS server IP address>
add network ntp-server <NTP server IP address>

Repeat these commands as needed to add multiple DNS and/or NTP servers. If you accidentally fat-finger an IP address, you can remove the incorrect entry using the remove command, like this:

remove network dns-server <Incorrect DNS IP address>

Substitute ntp-server for dns-server in the above command to remove an incorrect NTP server address.

To add a DNS search domain to the service node, use this command:

add network dns-search-domain <Domain name>

If you are using DHCP and your appliance happened to pick up some settings from the DHCP server, you may need to use the clear network dns-servers and/or clear network routes command before you can add DNS servers or routes to the service node.

Once you’ve added IP addresses, DNS servers, NTP servers, and successfully tested connectivity over both the management and transport networks, then you’re ready to proceed with adding the service node to NSX.

Adding Service Nodes to NSX

As with adding a gateway appliance in part 6, you’ll use NSX Manager (which you set up in part 3) to add the new service node to NSX. Once you’ve logged into the NSX Manager web UI via your browser, you’re ready to start the process of adding the service node.

  1. From the NSX Manager web UI, click on the Dashboard link across the top. If you’ve just logged into NSX Manager, you’re probably already at the Dashboard and can skip this step.

  2. In the Summary of Transport Components box, click the Add button on the row for Service Nodes. This opens the Create Service Node dialog box.

  3. In step 1 (Type), the Transport Node Type drop-down should already be set to “Service Node.” Click Next.

  4. Set the display name and (optionally) add one or more tags to the service node object. Click Next to proceed.

  5. Make sure that “Admin Status Enabled” is selected, and leave the other options untouched (unless you know you need to change them). Click Next.

  6. On step 4 (Credentials), you’ll need the SSL security certificate from the service node. Since you have established network connectivity to the service node, just SSH into the new service node and issue show switch certificate. Then copy the output and paste it into the Security Certificate box in NSX Manager. Click Next to continue.

  7. The final step in NSX Manager is to add a transport connector. A transport connector tells NSX how transport nodes can communicate over a transport zone (I described transport zones back in part 5). Click Add Connector, then specify the transport type (which tunneling protocol to use), the transport zone, and the IP address. The IP address you specify should match an IP address you assigned to the service node’s interface on the transport network. Click OK, then click Save.

At this point, you’ll see the number of registered service nodes increment to 1 (assuming this is the first) in the Summary of Transport Components box in NSX Manager. The Active count, however, will remain zero until you perform the final step.

The final step is performed back on the service node itself. If you opened an SSH session earlier to get the switch certificate, you can just re-use that connection. On the service node, set up communications with the NSX controller cluster using the command set switch manager-cluster W.X.Y.Z, where W.X.Y.Z represents the IP address of one of the controllers in your controller cluster. This IP address should be reachable via the service node interface assigned to management traffic (typically breth0).

Now go back and refresh the Summary of Transport Components box in NSX Manager, and you should see both Registered and Active Service Nodes set to 1 (again, assuming this is the first).

That’s really all there is to it. Given their role, you likely won’t have a lot of interaction with the service nodes directly. Still, if you want to use NSX with OpenStack Neutron today, you’ll want to have a service node present (and, honestly, if you’re using NSX and OpenStack in any sort of production environment, you’re probably big enough to want a service node anyway).

As always, feel free to post any questions, comments, thoughts, or ideas below. All courteous comments, with vendor disclosures where applicable, are welcome.


Welcome to part 9 in the Learning NVP/NSX blog series, in which I’ll discuss adding a gateway service to a logical network.

If you are just now joining me for this series, let me bring you up to speed real quick:

This installment of the series builds upon all the previous articles; in particular, I assume that you have a logical network created and configured and that you have an NSX gateway appliance up and running and added to the NSX domain. In this post, I’ll show you how to add a logical gateway service to your network so that you can provide routed (layer 3) connectivity into and out of your logical network.

Before I start, I think it’s important to distinguish between a gateway appliance and a gateway service. A gateway appliance is a physical installation (or a VM; it’s supported as a VM in certain configurations) of the NSX gateway software; this is what I showed you how to set up in part 6 of the series. A gateway service, on the other hand, is a logical construct. Typically, you would have multiple gateway appliances, and you would then create a gateway service that is instantiated across those appliances for redundancy and availability. Gateway services can be either layer 2 (providing bridged connectivity between a logical network and VLANs on the physical network) or layer 3 (providing routed—with or without NAT—connectivity between a logical network and the physical network).

It’s also important to understand the distinction between a gateway service and a logical router. A gateway service is an NSX construct; a logical router, on the other hand, is usually a construct of the cloud management platform (like OpenStack). A single gateway service can host many logical routers.
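
To put that distinction in OpenStack terms: when a tenant runs something like the following (a hypothetical Neutron CLI example), Neutron asks NSX for a logical router, and NSX instantiates it on a layer 3 gateway service. The tenant never interacts with the gateway service directly:

neutron router-create tenant-router-01            # tenant-visible logical router
neutron router-gateway-set tenant-router-01 ext-net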

In this series, I’m only going to focus on layer 3 gateway services (primarily because I have limited resources in my environment and can’t run both layer 2 and layer 3 gateway services).

To create a layer 3 gateway service, you’ll follow these steps from within NSX Manager (formerly NVP Manager, which I showed you how to set up in part 3):

  1. From the menu across the top of the NSX Manager page, select Network Components > Services > Gateway Services. This will take you to a page titled “Network Components Query Results,” where NSX Manager has precreated and executed a query for the list of gateway services. Your list will be empty, naturally.

  2. Click the Add button. This will open the Create Gateway Service dialog.

  3. Select “L3 Gateway Service” from the list. Other options in this list include “L2 Gateway Service” (to create a layer 2 gateway service) and “VTEP L2 Gateway Service” (to integrate a third-party top-of-rack [ToR] switch into NSX). Click Next, or click on the “2. Basics” button on the left.

  4. Provide a display name for the new layer 3 gateway service, then click Next (or click on “3. Transport Nodes” on the left). You can optionally add tags here as well, in case you wanted to associate additional metadata with this logical object in NSX.

  5. On the Transport Nodes screen, click Add Gateway to select a gateway appliance (which is classified as a transport node within NSX; hypervisors are also transport nodes) to host this layer 3 gateway service.

  6. From the Edit Gateway dialog box that pops up, you’ll need to select a transport node, a device ID, and a failure zone ID. The first option, the transport node, is pretty straightforward; this is a gateway appliance on which to host this gateway service. The device ID is the bridge (recall that NSX gateway appliances, by default, create OVS bridges to map to their interfaces) connected to the external network. The failure zone ID lets you “group” gateway appliances with regard to their relationship to the gateway service. For example, you could choose to host a gateway service on a gateway from failure zone 1 as well as failure zone 2. (Failure zones are intended to help represent different failure domains within your data center.)

  7. Once you’ve added at least two gateway appliances as transport nodes for your gateway service, click Save to create the gateway service and return to NSX Manager. That’s it—you’re done!

Note that this is more of an implementation task than an operational task. In other words, you’d generally deploy your gateway services when you first set up NSX and your cloud management platform, or when you are adding capacity to your environment. This isn’t something that you have to do when a cloud tenant (customer) needs a logical router; that’s handled automatically through the integration between NSX and the cloud management platform. As I mentioned earlier, a single layer 3 gateway service could host many logical routers.

In the next installment of the series, I’ll walk you through setting up an NSX service node to offload packet replication (for broadcast, unknown unicast, and multicast traffic).

As always, your feedback is welcome and encouraged, so feel free to speak up in the comments below.


A couple of weeks ago I had the privilege of joining Richard Campbell on RunAs Radio to talk VMware NSX and network virtualization. If you’d like to hear us get geeky about network virtualization and where it might take our industry, head over and listen to episode 346. I’d love to hear your feedback!


Welcome to Technology Short Take #37, the latest in my irregularly-published series in which I share interesting articles from around the Internet, miscellaneous thoughts, and whatever else I feel like throwing in. Here’s hoping you find something useful!

Networking

  • Ivan does a great job of describing the difference between the management, control, and data planes, as well as providing examples. Of course, the distinction between control plane protocols and data plane protocols isn’t always perfectly clear.
  • You’ve heard me talk about snowflake servers before. In this post on why networking needs a Chaos Monkey, Mike Bushong applies the term to networks—a snowflake network is an intricately crafted network that is carefully tailored to utilize a custom subset of networking features unique to your environment. What is the fix—if one exists—for snowflake networks? Designing your network for resiliency and unleashing a Chaos Monkey on it is one way, as Mike points out. A fan of network virtualization might also say that decomposing today’s complex physical networks into multiple simple logical networks on top of a simpler physical transport network—similar to Mike’s suggestion of converging on a smaller set of reference architectures—might also help. (Of course, I am a fan of network virtualization, since I work with/on VMware NSX.)
  • Martijn Smit has launched a series of articles on VMware NSX. Check out part 1 (general introduction) and part 2 (distributed services) for more information.
  • The elephants and mice post at Network Heresy has sparked some discussion across the “blogosphere” about how to address this issue. (Note that my name is on the byline for that Network Heresy post, but I didn’t really contribute all that much.) Jason Edelman took up the idea of using OpenFlow to provide a dedicated core/spine for elephant flows, while Marten Terpstra at Plexxi talks about how Plexxi’s Affinities could be used to help address the problem of elephant flows. Peter Phaal speaks up in the comments to Marten’s article about how sFlow can be used to rapidly detect elephant flows, and points to a demo taking place during SC13 that shows sFlow tracking elephant flows on SCinet (the SC13 network).
  • Want some additional information on layer 2 and layer 3 services in VMware NSX? Here’s a good source.
  • This looks interesting, but I’m not entirely sure how I might go about using it. Any thoughts?

Servers/Hardware

Nothing this time around, but I’ll keep my eyes peeled for something to include next time!

Security

I don’t have anything to share this time—feel free to suggest something to include next time.

Cloud Computing/Cloud Management

Operating Systems/Applications

  • I found this post on getting the most out of HAProxy—in which Twilio walks through some of the configuration options they’re using and why—to be quite helpful. If you’re relatively new to HAProxy, as I am, then I’d recommend giving this post a look.
  • This list is reasonably handy if you’re not a Terminal guru. While written for OS X, most of these tips apply to Linux or other Unix-like operating systems as well. I particularly liked tip #3, as I didn’t know about that particular shortcut.
  • Mike Preston has a great series going on tuning Debian Linux running under vSphere. In part 1, he covered installation, primarily centered around LVM and file system mount options. In part 2, Mike discusses things like using the appropriate virtual hardware, the right kernel modules for VMXNET3, getting rid of unnecessary hardware (like the virtual floppy), and similar tips. Finally, in part 3, he talks about a hodgepodge of tips—things like blacklisting other unnecessary kernel drivers, time synchronization, and modifying the Linux I/O scheduler. All good stuff, thanks Mike! (A quick check that the VMXNET3 driver is in use is sketched after this list.)
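
As a tiny companion to Mike’s series: on a Debian guest, you can quickly confirm that the NIC is actually using the VMXNET3 driver with standard Linux tools (a generic check, not something specific to Mike’s posts):

lsmod | grep vmxnet3    # confirm the VMXNET3 module is loaded
ethtool -i eth0         # show which driver backs the interface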

Storage

  • “Captain KVM,” aka Jon Benedict, takes on the discussion of enterprise storage vs. open source storage solutions in OpenStack environments. One good point that Jon makes is that solutions need to be evaluated on a variety of criteria. In other words, it’s not just about cost nor is it just about performance. You need to use the right solution for your particular needs. It’s nice to see Jon say that if your needs are properly met by an open source solution, then “by all means stick with Ceph, Gluster, or any of the other cool software storage solutions out there.” More vendors need to adopt this viewpoint, in my humble opinion. (By the way, if you’re thinking of using NetApp storage in an OpenStack environment, here’s a “how to” that Jon wrote.)
  • Duncan Epping has a quick post about a VMware KB article update regarding EMC VPLEX and Storage DRS/Storage IO Control. The update is actually applicable to all vMSC configurations, so have a look at Duncan’s article if you’re using or considering the use of vMSC in your environment.
  • Vladan Seget has a look at Microsoft ReFS.

Virtualization

I’d better wrap it up here so this doesn’t get too long for folks. As always, your courteous comments and feedback are welcome, so feel free to start (or join) the discussion below.


In part 7 of the Learning NVP series, I mentioned that I was planning to transition this series from NVP to NSX through an upgrade. I had an existing NVP installation running (all virtually) inside an OpenStack cloud, and I would just upgrade that to NSX 4.0.0. Here’s a quick update on that plan and the NVP-to-NSX transition.

As I mentioned, I have an installation of NVP 3.1.1 running successfully in a nested (virtualized) environment. (Yes, it is possible to run all of NVP completely virtualized, though we don’t support that for production environments.) Starting with NVP 3.1.x, NVP offered an “Update Coordinator” that coordinated and orchestrated the upgrade of the various components within an NVP domain. Since I was running NVP 3.1.1, I could just use the Update Coordinator to upgrade my installation and walk you (the readers) through the process along the way.

Using the Update Coordinator (which is built into NVP Manager), an NVP upgrade would typically look something like this:

  • You’d log into NVP Manager and go to the Update Coordinator screen.
  • If you hadn’t already, you’d upload the update files (appliance update files and OVS update files) to NVP Manager.
  • Once all the update files were uploaded, you’d select the version to which you’re upgrading and kick it off.
  • NVP Manager itself is upgraded first.
  • Next, the Update Coordinator pushes the appliance update files (sometimes called NUB files because of their .nub extension) out to all the appliances (service nodes, gateways, and controllers).
  • Next, the non-hypervisor transport nodes are upgraded (the service nodes and gateways).
  • Following that, the hypervisors need to be upgraded, though this isn’t handled by the Update Coordinator. (You could, of course, leverage a tool like Puppet or Chef or similar to help automate this process.)
  • After you’ve verified that the hypervisors have been updated, then the Update Coordinator upgrades the controller nodes.
  • Following the successful upgrade of the controller nodes, there is a cleanup phase and then you’re all set.

This is really high-level and I’m glossing over some details, naturally. Because an NVP upgrade is a pretty big deal—it could have an effect on the network connectivity of all the VMs and hypervisors within the NVP domain—it typically involves lots of planning, lots of testing, proper backups of all the components, and so on. However, since this was a lab environment and not a real production environment, just running through the Update Coordinator should have been fine.

As it turns out, though, I ran into a few problems—not problems with NVP, but problems with how I had deployed it. Basically, I didn’t do my due diligence and read the documentation.

When I first deployed the virtualized NVP appliances, I selected VMs that had a 10GB root disk. While this was enough to get NVP up and running, it turns out that it is not enough space to perform an upgrade. Specifically, it’s not enough space to upgrade the controllers; the transport nodes upgraded successfully. After the controllers were installed, I was left with only a couple of gigabytes of free space. A fair portion of that was then taken up by the appliance update file, which did not leave enough room to actually perform the controller software upgrade.

Unfortunately, there was no easy workaround. Because the NVP controller cluster is scale-out and highly available, I could have taken the controllers out (one at a time), rebuilt them with more disk space, and then re-joined them to the cluster—a rolling upgrade, if you will. However, because NVP 3.1.1 is a much older build of NVP, it wasn’t possible to rebuild the controllers with a matching software version (not easily, anyway).

So, long story short: instead of wasting cycles trying to fix a deployment issue that is completely my fault (and, by the way, completely documented—had I paid closer attention to the documentation, I wouldn’t have found myself in this position), I’m simply going to rebuild my lab environment from scratch using NSX 4.0.0. I had really hoped to be able to walk you through the upgrade process, but sadly it just doesn’t make sense to do so.

This will be the last post titled “Learning NVP”; moving forward, all future posts will be titled “Learning NSX.” The next post will discuss adding a gateway service to a logical network; this builds on information from part 5 (creating a logical network) and part 6 (adding a gateway appliance).

As always, your feedback is welcome and encouraged, so feel free to speak up in the comments below.


Welcome to part 6 of the Learning NVP blog series. In this part, I’m going to show you how to add an NVP gateway appliance to your NVP environment. In future posts, you’ll use this NVP gateway to host either L2 or L3 gateway services (more on those in a moment). First, though, let’s take a quick recap of what’s transpired so far:

In this part, I’m going to walk you through setting up an NVP gateway appliance. If you’ll recall from our introductory high-level architecture overview, the role of the gateway is to provide L2 (switched/bridged) and L3 (routed) connectivity between logical networks and physical networks. So, adding a gateway would then enable you to extend the logical network you created in part 4 to include either L2 or L3 connectivity to the outside world.

Aside: Many of you have probably seen some of the announcements from VMworld about NSX integrations from various networking suppliers (Arista, Brocade, Dell, and Juniper, for example). These announcements will allow NSX—which I’ve said before will leverage a great deal of NVP’s architecture—to use these hardware devices as L2 gateways, providing bridged/switched connectivity between logical networks and physical networks.

This post will focus only on getting the gateway appliance set up; in future posts, I’ll show you how to actually add the L2 or L3 connectivity to your logical network.

Building the NVP Gateway

The NVP gateway software is distributed as an ISO, like the NVP controller software. You’d typically install this software on a bare metal server, though with recent releases of NVP it is supported to install the gateway into a VM (refer to the latest NVP release notes for more details). As with the NVP controllers and NVP Manager, the gateway is built on Ubuntu 12.04, and the installer process is completely automated. Once you boot from the ISO, the installation will proceed automatically; when completed, you’ll be left at the login prompt.

Configuring the NVP Gateway

Once the NVP gateway software is installed, configuring the gateway is really straightforward. In fact, it feels a lot like configuring NVP controllers (I suspect this is by design). Here are the steps:

  1. Set the password for the admin user (optional, but highly recommended).

  2. Set the hostname for the gateway appliance (also optional, but strongly recommended).

  3. Configure the network interfaces; you’ll need management, transport, and external connectivity. (I’ll explain those in more detail later.)

  4. Configure DNS and NTP settings.

Let’s take a closer look at these steps. The first step is to set the password for the admin user, which you can accomplish with this command:

set user admin password

From here, you can proceed with setting the hostname for the gateway:

set hostname <hostname>

(So far, these commands should be pretty familiar. They are the same commands used when you set up the NVP controllers and NVP Manager.)

The next step is to configure network connectivity; you’ll start by listing the available network interfaces with this command:

show network interfaces

As you’ve seen with the other NVP appliances, the NVP gateway software builds an Open vSwitch (OVS) bridge for each physical interface. In the case of a gateway, you’ll need at least three interfaces—a management interface, a transport network interface, and an external network interface. The diagram below provides a bit more context around how these interfaces are used:

[Image: NVP gateway appliance interfaces]

Since these interfaces have very different responsibilities, it’s important that you configure them properly; otherwise, things won’t work as expected. Take the time to identify which interface listed in the show network interfaces output corresponds to each function. You’ll first want to establish management connectivity, so that should be the first interface to configure. Assuming that breth1 (the bridge matching the physical eth1 interface) is your management interface, you’ll configure it using this command:

set network interface breth1 ip config static 192.168.1.12 255.255.255.0

You’ll want to repeat this command for the other interfaces in the gateway, assigning appropriate IP addresses to each of them.

You may also need to configure the routing for the gateway. Check the routing table(s) with this command:

show network routes

If there is no default route, you can set one using this command:

add network route 0.0.0.0 0.0.0.0 <Default gateway IP address>

Once the appropriate network connectivity has been established, then you can proceed with the next step: adding DNS and NTP servers. Here are the commands for this step:

add network dns-server <DNS server IP address>
add network ntp-server <NTP server IP address>

If you accidentally fat-finger an IP address or hostname along the way, use the remove network dns-server or remove network ntp-server command to remove the incorrect entry, then re-add it correctly with the commands above.

Congrats! The NVP gateway appliance is now up and running. You’re ready to add it to NVP. Once it’s added to NVP, you’ll be able to use the gateway appliance to add gateway services to your logical networks.

Adding the Gateway to NVP

To add the new gateway appliance to NVP, you’ll use NVP Manager (I showed you how to set up NVP Manager in part 3 of the series). Once you’ve opened a web browser, navigated to the NVP Manager web UI, and logged in, then you can start the process of adding the gateway to NVP.

  1. Once you’re logged into NVP Manager, click on the Dashboard link across the top. (If you’re already at the Dashboard, you can skip this step.)

  2. In the Summary of Transport Components box, click the Connect & Add Transport Node button. This will open the Connect to Transport Node dialog box.

  3. Supply the management IP address of the gateway appliance, along with the appropriate username and password, then click Connect.

  4. After a moment, the Connect to Transport Node dialog box will show details of the gateway appliance, such as the interfaces, the bridges, the NIC bonds (if any), and the gateway’s SSL certificate. Click Configure at the bottom of the dialog box to continue.

  5. Supply a display name (something like nvp-gw-01) and, optionally, one or more tags. Click Next.

  6. Unless you know you need to select any of the options on the next screen (I’ll try to cover them in a later blog post), just click Next.

  7. On the final screen, you’ll need to establish connectivity to a transport zone. You’ll want to select the appropriate interface (in my example environment, it was breth2) and the appropriate encapsulation protocol (STT is generally recommended for connectivity back to hypervisors). Then select the appropriate transport zone from the drop-down list. In the end, you’ll have a screen that looks something like this (note that your interfaces, IP addresses, and transport zone information will likely be different):

  [Image: Adding a gateway to NVP]

  8. Click Save to finish the process. The number of gateways listed in the Summary of Transport Components box should increment by 1 in the Registered column. However, the Active column will remain unchanged—that’s because there’s one more step needed.

  9. Back on the gateway appliance itself, run this command (you can use the IP address of any controller in the NVP controller cluster):

     set switch manager-cluster <NVP controller IP address>

  10. Back in NVP Manager, refresh the Summary of Transport Components box (there’s a small refresh icon in the corner), and you’ll see the Active column update to show the gateway appliance is now registered and active in NVP.

That’s it—you’re all done adding a gateway appliance to NVP. In future posts, you’ll leverage the gateway appliance to add L2 (bridged) and L3 (routed) connectivity in and out of logical networks. First, though, I’ll need to address the transition from NVP to NSX, so look for that coming soon. In the meantime, feel free to post any questions, thoughts, or suggestions in the comments below. I welcome all courteous comments (even if you disagree with something I’ve said!).


Welcome to Technology Short Take #36. In this episode, I’ll share a variety of links from around the web, along with some random thoughts and ideas along the way. I try to keep things related to the key technology areas you’ll see in today’s data centers, though I do stray from time to time. In any case, enough with the introduction—bring on the content! I hope you find something useful.

Networking

  • This post is a bit older, but still useful if you’re interested in learning more about OpenFlow and OpenFlow controllers. Nick Buraglio has put together a basic reference OpenFlow controller VM—a KVM guest running CentOS 6.3 with the Floodlight open source controller.
  • Paul Fries takes on defining SDN, breaking it down into two “flavors”: host dominant and network dominant. This is a reasonable way of grouping the various approaches to SDN (using SDN in the very loose industry sense, not the original control plane-data plane separation sense). I’d like to add to Paul’s analysis that it’s important to understand that, in reality, host dominant and network dominant systems can coexist. It’s not at all unreasonable to think that you might have a fabric controller that is responsible for managing/optimizing traffic flows across the physical transport network/fabric, and an overlay controller—like VMware NSX—that integrates tightly with the hypervisor(s) and workloads running on those hypervisors to create and manage logical connectivity and logical network services.
  • This is an older post from April 2013, but still useful, I think. In his article titled “OpenFlow Test Deployment Options,” Brent Salisbury—a rock star among the new breed of network engineers emerging in the world of SDN—discusses some practical strategies for deploying OpenFlow into an existing network topology. One key statement that I really liked from this article was this one: “SDN does not represent the end of networking as we know it. More than ever, talented operators, engineers and architects will be required to shape the future of networking.” New technologies don’t make talented folks who embrace change obsolete; if anything, these new technologies make them more valuable.
  • Great post by Ivan (is there a post by Ivan that isn’t great?) on flow table explosion with OpenFlow. He does a great job of explaining how OpenFlow works and why OpenFlow 1.3 is needed in order to see broader adoption of OpenFlow.

Servers/Hardware

  • Intel announced the E5 2600 v2 series of CPUs back at Intel Developer Forum (IDF) 2013 (you can follow my IDF 2013 coverage by looking at posts with the IDF2013 tag). Kevin Houston followed up on that announcement with a useful post on vSphere compatibility with the E5 2600 v2. You can also get more details on the E5 2600 v2 itself in this related post by Kevin as well. (Although I’m just now catching Kevin’s posts, they were published almost immediately after the Intel announcements—thanks for the promptness, Kevin!)

Security

Nothing this time around, but I’ll keep my eyes peeled for content to share with you in future posts.

Cloud Computing/Cloud Management

Operating Systems/Applications

  • I found this refresher on some of the most useful apt-get/apt-cache commands to be helpful. I don’t use some of them on a regular basis, which makes it hard to remember the specific command and/or syntax when I do need one. (A few of the most common ones are sketched after this list.)
  • I wouldn’t have initially considered comparing Docker and Chef, but considering that I’m not an expert in either technology it could just be my limited understanding. However, this post on why Docker and why not Chef does a good job of looking at ways that Docker could potentially replace certain uses for Chef. Personally, I tend to lean toward the author’s final conclusions that it is entirely possible that we’ll see Docker and Chef being used together. However, as I stated, I’m not an expert in either technology, so my view may be incorrect. (I reserve the right to revise my view in the future.)
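
For quick reference, here are a few of the standard apt commands from that refresher that I find myself reaching for most often on Debian/Ubuntu systems:

apt-get update                  # refresh package metadata
apt-cache search <keyword>      # find packages matching a keyword
apt-cache policy <package>      # show installed and candidate versions
apt-get install <package>       # install (or upgrade) a package
apt-get autoremove              # remove dependencies no longer needed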

Storage

  • Using Dell EqualLogic with VMFS? Better read this heads-up from Cormac Hogan and take the recommended action right away.
  • Erwin van Londen proposes some ideas for enhancing FC error detection and notification with the idea of making hosts more aware of path errors and able to “route” around them. It’s interesting stuff; as Erwin points out, though, even if the T11 accepted the proposal it would be a while before this capability showed up in actual products.

Virtualization

That’s it for this time around, but feel free to continue the conversation in the comments below. If you have any additional information to share regarding any of the topics I’ve mentioned, please take the time to add it in the comments. Courteous comments are always welcome!


As most of you probably know, I visit quite a few VMUG User Conferences around the United States and around the world. I’d probably do even more if my calendar allowed, because it’s truly an honor for me to have the opportunity to help educate the VMware user community. I know I’m not alone in that regard; there are numerous VMware “rock stars” (not that I consider myself a “rock star”) out there who also work tirelessly to support the VMware community. One need not look very far to see some examples of these types of individuals: Mike Laverick, William Lam, Duncan Epping, Josh Atwell, Nick Weaver, Alan Renouf, Chris Colotti, Cody Bunch, or Cormac Hogan are all great examples. (And I’m sure there are many, many more I’ve forgotten!)

However, one thing that has consistently been a topic of discussion among those of us who frequent VMUGs has been this question: “How do we get users more engaged in VMUG?” VMUG is, after all, the VMware User Group. And while all of us are more than happy to help support VMUG (at least, I know I am), we’d also like to see more user engagement—more customers speaking about their use cases, their challenges, the things they’ve learned, and the things they want to learn. We want to see users get connected with other users, to share information and build a community-based body of knowledge. So how can we do that?

As I see it, there are a variety of reasons why users don’t volunteer to speak:

  • They might be afraid of public speaking, or aren’t sure how good they’ll be.
  • They feel like the information they could share won’t be helpful or useful to others.
  • They aren’t sure how to structure their presentation to make it informative yet engaging.

We (meaning a group of us that support a lot of these events) have tossed around a few ideas, but nothing has ever really materialized. Today I hope to change all that. Today, I’m announcing that I will personally help mentor 5 different VMware users who are willing to step up and volunteer to speak for the first time at a local VMUG meeting in the near future.

So what does this mean?

  • I will help you select a topic on which to speak (in coordination with your local VMUG leader).
  • I will provide guidance and feedback on gathering your content.
  • I will review and provide feedback and suggestions for improving your presentation.
  • If desired, I will provide tips and tricks for public speaking.

And I’m calling on others within the VMUG community who are frequent speakers to do the same. I think that Mike Laverick might have already done something like this; perhaps the others have as well. If so, that’s awesome. If not, I challenge you, as someone viewed in a technical leadership role within the VMware and VMUG communities, to use that leadership role in a way that I hope will reinvigorate and renew user involvement and participation in the VMware/VMUG community.

If you’re one of the 5 people willing to take me up on this offer, the first step is to contact me and your local VMUG leader and express your interest. Don’t have my e-mail address? Here’s your first challenge: it’s somewhere on this site.

If you’re already a frequent speaker at VMUGs and you, too, want to help mentor other speakers, you can either post a comment here to that effect (and provide people with a way of getting in touch with you), or—if you have your own blog—I encourage you to make the same offer via your own site. Where possible, I’ll try to update this (or you can use trackbacks) so that readers have a good idea of who out there is willing to provide assistance to help them become the next VMUG “rock star” presenter.

Good luck, and I look forward to hearing from you!

UPDATE: A few folks have noted that all the names I listed above are VMware employees, so I’ve added a couple others who are not. Don’t read too much into that; it was all VMware employees because I work at VMware, too, and they’re the ones I communicate with frequently. There are lots of passionate and dedicated VMUG supporters out there—you know who you are!

Also, be sure to check the comments; a number of folks are volunteering to also mentor new speakers.

