
Welcome to Technology Short Take #46. That’s right, it’s time for yet another collection of links and articles from around the Internet on various data center-related technologies, products, projects, and efforts. As always, there is no rhyme or reason to my collection; this is just a glimpse into what I’ve seen over the past few weeks. I hope you are able to glean something useful.

Networking

  • This post by Matt Oswalt—the first in a series, apparently—provides a great introduction to 5 development tools for network engineers. I’ve already increased my usage of Git in an effort to become more fluent with this very popular version control tool, and I was already planning on exploring Jinja2 as well (these are both mentioned in Matt’s article). This is a really useful post and I’m looking forward to future posts in this series.
  • Matt also recently posted part 4 (of 5) in his series on SDN protocols; this post covers OpFlex and declarative networking.
  • It was good to read this post on Cumulus Linux first impressions by Jeremy Stretch. I’m a fan of Cumulus, but I’m admittedly a Linux guy (see here) so you might say I’m a bit biased. Jeremy is a “hard-core” networking professional, and so hearing his feedback on Cumulus Linux was, in my opinion, useful. I like that Jeremy was completely honest: “I’m not going to lie: Cumulus Linux was not immediately appealing to me.” I highly encourage reading this article.
  • If you’re interested in more details on how NSX handles ARP suppression, Dmitri Kalintsev has a post just for you. Dmitri has some other great NSX-related content as well.

Servers/Hardware

  • While all the attention is “up the stack,” there are still some occasions when you need to worry about the details in the hardware. Kevin Houston’s recent article on selecting the right memory for your blade server is one such example.
  • “Junk-box infrastructure,” eh? Interesting thought. There’s no doubt Greg Ferro had to be involved somehow in this discussion; this echoes the “post-scarcity” discussions he and I had at IDF 2014 in September.

Security

Nothing this time around, but I’ll stay tuned for material to include next time. You’re also invited to share relevant links or articles in the comments!

Cloud Computing/Cloud Management

  • Jay Pipes has an excellent and well-written post on the core of OpenStack. I really appreciate Jay’s focus on what’s beneficial to the users of OpenStack: the cloud operators, the end users/consumers, and the developers building applications on top of OpenStack.
  • Andy Bruce has an article on adding external networks to Neutron with GRE.
  • Previous Technology Short Takes have mentioned CoreOS and Kubernetes in the same sentence, but as far as I can tell I haven’t pointed readers to this two-part series on running Kubernetes directly on CoreOS. See part 1 and part 2 for the full details. (Nice use of VMware Fusion in part 2, by the way.)
  • Here’s an article on using Terraform in conjunction with Docker and Digital Ocean. Terraform looks interesting, but I wish it would add an OpenStack provider.
  • Here’s a walkthrough to running containers on vCloud Air.
  • VMware, Vagrant, and Docker together is the subject of this blog post by Fabio Rapposelli. This is useful information if you are looking to combine these technologies in a useful way.
  • As can be expected given the very recent release of Juno, the fall 2014 release of OpenStack, a number of Juno-specific “how to install” pages are popping up. Here’s one such example. Much of the content is similar to previous “how to install” guides that I’ve seen, but it might be useful to a few folks out there.

Operating Systems/Applications

  • I’m seriously considering using the information in this article on an IRC proxy for myself. I find it enormously helpful to stay connected to various open source-related IRC channels, but staying logged in while on the move is, for all intents and purposes, impossible. Perhaps the use of an IRC proxy can help. Anyone else out there using a setup like this?
  • Did you happen to notice that CoreOS is now available on Microsoft Azure? It makes me wonder when VMware will announce support for CoreOS on vCloud Air.

Storage

  • I’m not sure if this falls into storage or virtualization, but we’ll place it here in the Storage section. Eric Sloof (in conjunction with VMworld TV) has a video introducing readers to CloudVolumes, the relatively recent VMware acquisition that’s being put to work in the end-user computing space.
  • Greg Schulz has a decent two-part (well, three-part actually, but it’s only the second two that interest me) series on VVols and storage I/O fundamentals (part 1 and part 2).

Virtualization

  • It will be nice when the virtualization industry converges on some common set of disk formats for virtual machines. OVF/OVA was an attempt, but there’s still some work to be done on that front. Until then, VM converters like version 3.0 of the Microsoft Virtual Machine Converter will keep popping up.
  • William Lam has a great guest post from Peter Bjork on a Mac Mini setup running VSAN.
  • A short while ago I gave you a quick introduction to Vagrant. One of the key components of Vagrant is the box, which is essentially a VM template. Cody Bunch recently published a post on using Packer to make Vagrant boxes, which might come in handy if you want to create your own Vagrant boxes.
  • Here’s a quick reminder from Ben Armstrong that if you want to use the Microsoft Virtual Machine Converter 3.0, you’ll need to be sure to unblock WMI in the Windows Firewall.
  • Want to install ESXi 5.5 Patch03 on the new Mac Pro? William Lam shows you how.
  • Installing CoreOS on vSphere got a bit easier, thanks to the inclusion of Open VM Tools in CoreOS Alpha 490.0.0 and a new script by William Lam. Here are the details. (I guess William has been doing some super-useful stuff, since I keep referencing his links here. Keep up the great work, William!)
  • Gabrie van Zanten brings up a flaw between VMware Auto Deploy (in vSphere 5.1) and Microsoft Cluster Server (MSCS). What’s the flaw, you say? The two won’t work together.

I have more articles in my bucket labeled “Articles to blog about,” but I’ll save those for some other time in the interest of keeping this from getting overly long (which it probably already is). Until next time!


This is part 18 of the Learning NSX blog series, in which I talk about using layer 3 (L3) routing with VMware NSX but without network address translation (NAT). This post describes a configuration that offers yet another connectivity option for OpenStack cloud administrators and operators.

In part 6, I showed you how to add a gateway appliance to your NSX installation. Part 9 leveraged the gateway appliances to create a L3 gateway service, which—as I explained in part 15—provides the functionality for logical routers in OpenStack. (Logical routing was covered in part 14.) Part 16 expanded the routing configuration to support multiple external networks. This post expands the options again by showing you how to do logical routing without using network address translation (NAT). Of course, it would probably be helpful to read the entire series; links to all posts can be found on the Learning NVP/NSX page.

As I mentioned, so far you’ve seen three different external connectivity options:

  • Routing (layer 3 connectivity) to a single external network
  • Routing (layer 3 connectivity) to multiple external networks using VLANs
  • Bridging (layer 2 connectivity) between a logical network and a physical broadcast domain

Both of the routed connectivity options you’ve seen so far have involved the use of NAT (specifically, source NAT, or SNAT) on the logical router in order to establish outbound connectivity. You’ve had to use floating IP addresses to establish inbound connectivity to instances. However, NSX is absolutely capable of supporting routed connectivity options that do not involve NAT, allowing cloud administrators and operators to use fully routable IP address spaces inside NSX logical networks.

Assuming that you already have logical routing working, the steps to establish a no-NAT routing configuration are pretty straightforward. Note that because the IP address space used in the logical network must interoperate with the IP addressing scheme used on the public network, this configuration is typically set up by a cloud administrator and not by a tenant.

To set up a no-NAT configuration, follow these steps (all commands are assumed to be run using admin credentials):

  1. Store the tenant’s ID in an environment variable, as you’ll be needing it quite a bit over the next few minutes. You can do that using keystone tenant-list and awk. For example, if the tenant’s name was blue, then you could use keystone tenant-list | awk '/\ blue\ / {print $2}' to find the tenant ID and assign it to an environment variable. The following commands will assume you’ve stored the tenant’s ID as $TID.

  2. Create a logical router and assign it to the tenant. The command to do this is neutron router-create --tenant-id $TID <Name of router>. It might be possible to re-use an existing logical router, but you’re really much better off just creating another logical router for this purpose.

  3. Create a logical network on behalf of the tenant, using neutron net-create --tenant-id $TID <Name of network>.

  4. Create a subnet and associate it with the new logical network you just created. You can do that with this command:

neutron subnet-create --tenant-id $TID --name <Subnet name> --gateway <IP address for logical router> <Name of network> <CIDR>
  5. Attach the logical router to the logical network and associated subnet with neutron router-interface-add <Router ID> <Subnet name>. The IP address of this interface will be the IP address provided to the --gateway parameter when you created the subnet.

  6. Attach the logical router to the external network that will serve as its uplink. This is where you’ll tell Neutron and NSX that NAT should not be active. The command looks like this:

neutron router-gateway-set --disable-snat <Router ID> <External network ID>
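
To pull these steps together, here’s what a hypothetical end-to-end run might look like for a tenant named blue. All of the names, addresses, and the external network reference below are made up purely for illustration (and note that most neutron commands will accept either a name or a UUID):

# Run these with admin credentials; all names/addresses are illustrative
export TID=$(keystone tenant-list | awk '/\ blue\ / {print $2}')
neutron router-create --tenant-id $TID blue-nonat-rtr
neutron net-create --tenant-id $TID blue-nonat-net
neutron subnet-create --tenant-id $TID --name blue-nonat-subnet \
--gateway 192.168.50.1 blue-nonat-net 192.168.50.0/24
neutron router-interface-add blue-nonat-rtr blue-nonat-subnet
neutron router-gateway-set --disable-snat blue-nonat-rtr ext-net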

You’re almost done—you now have a logical network using IP addresses from a subnet that is routable within your larger network without any use of NAT. However, you still need to let the rest of the network know about this new subnet, and because NSX in multi-hypervisor environments doesn’t yet support dynamic routing protocols like OSPF and BGP, you’ll need to use static routing.

To do static routing, you’ll need the IP address assigned to the router during step 6 above. neutron port-list will show you the list of ports in Neutron; depending on the size of your installation, this list might be quite sizable. This command might help:

neutron port-list -c id -c fixed_ips -c device_owner | awk '/network:router_gateway/'

This will show you only the ports that are acting as router gateway ports. In large environments, even this list may not be short enough to know for sure which IP address is assigned to the router. For now, though, let’s assume that you’re able to determine the IP address assigned to the logical router on the external network.

Once you have this address, then it’s just a simple matter of adding that route to other network devices. Often, this is some form of ip route add; the specific commands and syntax will depend on the network device and are outside the scope of this article. Once you get the IP routing table(s) updated on the other network devices, then you’re ready to roll!
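
Purely as an illustration: on a Linux-based router, if the no-NAT logical network were 192.168.50.0/24 and the logical router’s uplink interface had been assigned 10.1.1.5 on the external network, the static route might look like this:

ip route add 192.168.50.0/24 via 10.1.1.5

The equivalent on a dedicated router or L3 switch will differ syntactically, but the idea is the same: point the tenant subnet at the logical router’s uplink address.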

I hope you find this article (and this series) helpful. As always, if you have any questions, corrections, clarifications, or concerns, I invite you to speak up in the comments below. All courteous comments are welcome!


This is part 17 of the Learning NSX blog series. In this post, I’ll show you how to add layer 2 (L2) connectivity to your NSX environment, and how to leverage that L2 connectivity in an NSX-powered OpenStack implementation. This will allow you, as an operator of an NSX-powered OpenStack cloud, to offer L2/bridged connectivity to your tenants as an additional option.

As you might expect, this post does build on content from previous posts in the series. Links to all the posts in the series are available on the Learning NVP/NSX page; in particular, this post will leverage content from part 6. Additionally, I’ll be discussing using NSX in the context of OpenStack, so reviewing part 11 and part 12 might also be helpful.

There are 4 basic steps to adding L2 connectivity to your NSX-powered OpenStack environment:

  1. Add at least one NSX gateway appliance to your NSX implementation. (Ideally, you would add two NSX gateway appliances for redundancy.)
  2. Create an NSX L2 gateway service.
  3. Configure OpenStack for L2 connectivity by configuring Neutron to use the L2 gateway service you just created.
  4. Add L2 connectivity to a Neutron logical network by attaching to the L2 gateway service.

Let’s take a look at each of these steps. (By the way, if the concept of “L2 connectivity” doesn’t make sense to you, please review part 1 of my “Introduction to Networking” series.)

Adding an NSX Gateway Appliance

I described the process for adding an NSX gateway appliance in part 6 of the series, so refer back to that article for details on how to add an NSX gateway appliance. The process for adding a gateway appliance is the same regardless of whether you’ll use that gateway appliance for L2 (bridged) or L3 (routed) connectivity.

A few things to note:

  • Generally, your gateway appliance will have at least three (3) network interfaces. One of those interfaces will be used for management traffic, one for transport (overlay) traffic, and one for external traffic. You’ll need to assign IP addresses to the management and transport interfaces, but the external interface does not require an IP address.
  • If you are going to use the gateway appliance to provide L2 connectivity to multiple VLANs, you’ll want to ensure that all appropriate VLANs are trunked to the external interface of the gateway appliances. If you are deploying redundant gateway appliances, make sure all the VLANs are trunked to all appliances.

Once you have the gateway appliance built and added to NSX using the instructions in part 6, you’re ready to proceed to the next step.

Creating an NSX L2 Gateway Service

After your gateway appliances (I’ll assume you’re using two appliances for redundancy) are built and added to NSX, you’re ready to create the L2 gateway service that will provide the L2 connectivity in and out of an NSX-backed logical network. This process is similar to the process described in part 9 of the series, which showed you how to add an L3 gateway service to NSX. (If you’re unclear on the difference between a gateway appliance and a gateway service, check out part 15 for a more detailed explanation.)

Before we walk through creating an L2 gateway service, keep in mind that you may connect either an L3 gateway service or an L2 gateway service to a single broadcast domain on the physical network, but not both. Let’s say you connect an L3 gateway service to VLAN 100 (perhaps using multiple VLANs as described in part 16). You can’t also connect an L2 gateway service to VLAN 100; you’d need to use a different VLAN on the outside of the L2 gateway service. Be sure to take this fact into account in your designs.

To create an L2 gateway service, follow these steps from within NSX Manager:

  1. From the menu across the top of the NSX Manager page, select Network Components > Services > Gateway Services. This will take you to a page titled “Network Components Query Results,” where NSX Manager has precreated and executed a query for the list of gateway services. Your list may or may not be empty, depending on whether you’ve created other gateway services. Any gateway services that you’ve already created will be listed here.

  2. Click the Add button. This will open the Create Gateway Service dialog.

  3. Select “L2 Gateway Service” from the list. Other options in this list include “L3 Gateway Service” (you saw this in part 9) and “VTEP L2 Gateway Service” (to integrate a third-party top-of-rack [ToR] switch into NSX; you’ll use this in a future post). Click Next, or click on the “2. Basics” button on the left.

  4. Provide a display name for the new L2 gateway service, then click Next (or click on “3. Transport Nodes” on the left). You can optionally add tags here as well, in case you wanted to associate additional metadata with this logical object in NSX.

  5. On the Transport Nodes screen, click Add Gateway to select a gateway appliance (which is classified as a transport node within NSX; hypervisors are also transport nodes) to host this L2 gateway service.

  6. From the Edit Gateway dialog box that pops up, you’ll need to select a transport node and a device ID. The first option, the transport node, is pretty straightforward; this is a gateway appliance on which to host this gateway service. The device ID is the bridge (recall that NSX gateway appliances, by default, create OVS bridges to map to their interfaces) connected to the external network.

  7. Once you’ve added two (2) gateway appliances as transport nodes for your gateway service, click Save to create the gateway service and return to NSX Manager. You can create a gateway service with only a single gateway appliance, but it won’t be redundant and protected against the failure of the gateway appliance.

NSX is now ready to provide L2 (bridged) connectivity between NSX-backed logical networks and external networks connected to the gateway appliances in the L2 gateway service. Before we can leverage this option inside OpenStack, though, we’ll need to first configure OpenStack to recognize and use this new L2 gateway service.

Configure OpenStack for L2 Connectivity

Configuring OpenStack for L2 connectivity using NSX builds upon the specific details presented in part 12 of this series. I highly recommend reviewing that post if you haven’t already read it.

To configure OpenStack to recognize the L2 gateway service you just created, you’ll need to edit the configuration file for the NSX plugin on the Neutron server. In earlier versions of the plugin, this file was called nvp.ini and was found in the /etc/neutron/plugins/nicira directory. (In fact, this is the information I shared with you in part 12.) Newer versions of the plugin, however, use a configuration file named nsx.ini located in the /etc/neutron/plugins/vmware directory. I’ll assume you are using a newer version of the plugin.

Only a single change is needed to nsx.ini in order to configure OpenStack to recognize/use the new L2 gateway service. Simply add the UUID of the L2 gateway service (easily obtained via NSX Manager) to the nsx.ini file as the value for the default_l2_gw_service_uuid setting. (You followed a similar procedure in part 12 as part of the OpenStack integration, but for L3 connectivity that time.) Then restart the Neutron server, and you should be ready to go!
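
As a rough sketch (the UUID below is just a placeholder, and the exact section layout can vary between plugin versions), the relevant bit of nsx.ini ends up looking something like this:

[DEFAULT]
# UUID of the L2 gateway service created in NSX Manager (placeholder value)
default_l2_gw_service_uuid = 11111111-2222-3333-4444-555555555555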

Neutron recognizes L2 gateway services as network gateways, so all the related Neutron commands use the term net-gateway. You can verify that the L2 gateway service is recognized by OpenStack Neutron by running the following command with admin permissions:

neutron net-gateway-list

You should see a single entry in the list, with a description that reads something like “default L2 gateway service” or similar. As long as you see that entry, you’re ready to proceed! If you don’t see that entry, it’s time to check in NSX Manager and/or double-check your typing.

Adding L2 Connectivity to a Neutron Logical Network

With the NSX gateway appliances installed, the L2 gateway service created, and OpenStack Neutron configured appropriately, you’re now in a position to add L2 connectivity to a Neutron logical network. However, there are a few limitations that you’ll want to consider:

  • A given Neutron logical network may be connected to either a logical router (hosted on gateway appliances that are part of an L3 gateway service) or a network gateway (an L2 gateway service), but not both. In other words, you can provide L3 (routed) or L2 (bridged) connectivity into and out of logical networks, but not both simultaneously.
  • Each Neutron logical network may be associated with exactly one broadcast domain on the physical network. Similarly, each broadcast domain on the physical network may be associated with exactly one Neutron logical network. For example, you can’t associate VLAN 100 with both logical network A as well as logical network B.
  • Finally, by default network gateway operations are restricted to users with administrative credentials only. There is a model whereby tenants can have their own network gateways, but for the purposes of this article we’ll assume the default model of provider-supplied gateways.

With these considerations in mind, let’s walk through what’s required to add L2 connectivity to a Neutron logical network.

  1. If you don’t already have a logical network, create one using the neutron net-create command. This can be done with standard tenant credentials.

  2. If you had to create the logical network, create a subnet as well using the neutron subnet-create command. You can leave DHCP enabled on this Neutron subnet, as the Neutron DHCP server (which is an instance of dnsmasq running in a network namespace on a Neutron network node) won’t provide addresses to systems on the physical network. However, the logical network and the physical network are going to be sharing an IP address space, so it would probably be a good idea to control the range of addresses using the --allocation-pool parameter when creating the subnet. As with creating the network, standard tenant credentials are all that are needed here.

  3. You’ll need to get the UUID of the network gateway, which you can do with this command: neutron net-gateway-list | awk '/\ default\ / {print $2}'. (You can also assign this to an environment variable for use later, if that helps you.) You’ll also need the UUID of the logical network, which you can also store into an environment variable. This command and all subsequent commands require administrative credentials.

  4. Attach the logical network to the network gateway using the neutron net-gateway-connect command. Assuming that you’ve stored the UUID of the network gateway in $GWID and the UUID for the logical network in $NID, then the command you’d use would be neutron net-gateway-connect $GWID $NID --segmentation_type=flat. This command must be done by someone with administrative credentials.

  5. If you are using multiple VLANs on the outside of the network gateway, then you’d replace --segmentation_type=flat with --segmentation_type=vlan and add another parameter, --segmentation_id=, with the appropriate VLAN ID. For example, if you wanted to bridge the logical network to VLAN 200, then you’d use segmentation_type=vlan and segmentation_id=200 (see the example command following this list).

  6. That’s it! You now have your Neutron logical network bridged out to a broadcast domain on the physical network.
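
To make the VLAN variation in step 5 concrete, here’s roughly what that command would look like for the VLAN 200 example, reusing the $GWID and $NID environment variables assumed in step 4:

neutron net-gateway-connect $GWID $NID --segmentation_type=vlan --segmentation_id=200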

If you need to change the mapping between a broadcast domain on the physical network and a Neutron logical network, simply use neutron net-gateway-disconnect to disconnect the existing logical network, and then use neutron net-gateway-connect to connect a different logical network to the physical network segment.

I hope you’ve found this post to be useful. The use of L2 gateways offers administrators and operators a new option for network connectivity for tenants in addition to L3 routing. I’ll explore additional options for network connectivity in future posts, so stay tuned. In the meantime, feel free to share any comments, thoughts, or corrections in the comments below.


In this post, I’ll describe a technique I found for simplifying the use of multi-machine Vagrantfiles by extracting configuration data into a separate YAML file. This technique is by no means something that I invented or created, so I can’t take any credit whatsoever; this is an idea I first saw here. I wanted to share it here in the hopes that it might prove useful to a larger audience.

If you aren’t familiar with Vagrant and Vagrantfiles, you might start with my quick introduction to Vagrant.

I found this technique after trying to find a way to simplify the creation of multiple machines using Vagrant. Specifically, I was trying to create multiple instances of CoreOS along with an Ubuntu instance for testing things like etcd, fleet, Docker, etc. The Vagrantfile was getting more and more complex, and making changes (to add another CoreOS node, for example) wasn’t as straightforward as I would have liked.

So what’s the fix? As with other DSLs (domain-specific languages) such as Puppet, the fix was found in separating the data from the code. In Puppet, that means parameterizing the module or class, and I needed to use a similar technique here with Vagrant. So, based on the example provided here, I came up with this Vagrantfile:

(If you can’t see the code block above, please click here.)

You’ll note that there is no specific data in this Vagrantfile; all the pertinent configuration data is found in an external YAML file, named servers.yaml (the name is obviously configurable within the Vagrantfile). The Vagrantfile simply retrieves the data from the YAML file and iterates over it.
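
As a rough sketch, a Vagrantfile along these lines might look like the following. It assumes a servers.yaml containing a list of entries with name, box, ram, and ip keys (those key names are mine, chosen for illustration), and it uses the VirtualBox provider for the memory setting:

# -*- mode: ruby -*-
# vi: set ft=ruby :
require 'yaml'

# Pull all machine definitions out of the external YAML file
servers = YAML.load_file(File.join(File.dirname(__FILE__), 'servers.yaml'))

Vagrant.configure('2') do |config|
  servers.each do |server|
    config.vm.define server['name'] do |srv|
      srv.vm.box = server['box']
      srv.vm.hostname = server['name']
      srv.vm.network 'private_network', ip: server['ip']
      srv.vm.provider 'virtualbox' do |vb|
        vb.memory = server['ram']
      end
    end
  end
end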

Here’s a sample servers.yaml you could use with this Vagrantfile:

(Click here if the code block above isn’t visible.)

Using the sample servers.yaml file shown above, the Vagrantfile would create three VMs, using the CoreOS Alpha Vagrant box, each with 512MB of RAM and the corresponding IP address. To add more systems to the configuration, simply add lines to the YAML file and you’re done. It’s as simple as that.
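
For illustration, a servers.yaml matching that description (with my hypothetical name/box/ram/ip keys and made-up IP addresses) might look like this:

# Each entry below becomes one VM
- name: coreos1
  box: coreos-alpha
  ram: 512
  ip: 192.168.100.101
- name: coreos2
  box: coreos-alpha
  ram: 512
  ip: 192.168.100.102
- name: coreos3
  box: coreos-alpha
  ram: 512
  ip: 192.168.100.103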

Clearly, there’s a lot more that could be done with this sort of approach, but I think this bare-bones example should give you an idea of how flexible something like this can be.

For an even more supercharged approach, check out Oh My Vagrant!. It takes the approach I’ve described here to an entirely new level.


This is part 16 of the Learning NSX series, in which I will show you how to configure VMware NSX to route to multiple external VLANs. This configuration will allow you to have logical routers that could be uplinked to any of the external VLANs, providing additional flexibility for consumers of NSX logical networks.

Naturally, this post builds on all the previous entries in this series, so I encourage you to visit the Learning NVP/NSX page for links to previous posts. Because I’ll specifically be discussing NSX gateways and routing, there are some posts that are more applicable than others; specifically, I strongly recommend reviewing part 6, part 9, part 14, and part 15. Additionally, I’ll assume you’re using VMware NSX with OpenStack, so reviewing part 11 and part 12 might also be helpful.

Ready? Let’s start with a very quick review.

Review of NSX Gateway Connectivity

You may recall from part 6 that the NSX gateway appliance is the piece of VMware NSX that handles traffic into or out of logical networks. As such, the NSX gateway appliance is something of a “three-legged” appliance:

  • One “leg” (network interface) provides management connectivity between the gateway appliance and the nodes in the NSX controller cluster
  • One “leg” provides connectivity to the transport network, which carries the encapsulated logical network traffic
  • One “leg” is the uplink and provides connectivity to physical networks

That’s the physical architecture. From a more logical architecture, you may recall from part 15 that NSX gateway appliances are combined into an NSX gateway service, and the NSX gateway service hosts one or more logical routers. Neither the NSX gateway appliance nor the NSX gateway service are visible to the consumers of the environment; they are only visible to the operators and/or administrators. Consumers only see logical routers, which also serve as the default gateway/default route/IP gateway to/from their logical networks.

The configurations I’ve shown you/discussed so far have assumed the presence of only a single uplink. NSX is not constrained to having only a single uplink, nor is it constrained to having only a single physical network on an uplink. If you need multiple networks on the outside of an NSX gateway appliance, you can either use multiple uplinks, or you can use multiple VLANs on an uplink. In this post I’ll show you how to use multiple VLANs on the outside. This diagram provides a graphical representation of what the configuration will look like.

Multiple VLANs with NSX Gateways


Setting up this configuration will involve three steps:

  1. Configuring the uplink to carry multiple VLANs.
  2. Verifying the gateway configuration.
  3. Setting up the external networks in OpenStack.

Let’s take a look at each of these sections.

Configuring the Uplink to Carry Multiple VLANs

The process for this step will vary, mostly because it involves configuring your physical network to pass the appropriate VLANs to the NSX gateway appliance. I’ve written a few articles in the past that might be helpful here:

Although the titles of some of these articles seem to imply they are VMware-specific, they aren’t—the physical switch configuration is absolutely applicable here.

Verifying the Gateway Configuration

No special configuration is required on the NSX gateway appliance. As you probably already know, the NSX gateway appliance leverages Open vSwitch (OVS). OVS ports are, by default, trunk ports, and therefore will carry the VLAN tags passed by a properly configured physical switch. Further, the OVS bridge for the external uplink (typically breth1 or breth2) doesn’t need an IP address assigned to it. This is because the IP address(es) for logical routing are assigned to the logical routers, not the NSX gateway appliance’s interface. If you do have IP addresses assigned to the external uplink interface, you can safely remove them. If you prefer to leave them, that’s fine too.

As a side note, the NSX gateway appliances do support configuring VLAN sub-interfaces using a command like this:

add network interface <physical interface> vlan <VLAN ID>

Thus far, I haven’t found a need to use VLAN sub-interfaces when using multiple VLANs on the outside of an NSX gateway appliance, but I did want to point out that this functionality does indeed exist.

Setting up the External Networks

This is the only moderately tricky part of the configuration. In this step, you’ll prepare multiple external networks that can be used as uplinks for logical routers.

The command you’ll want to use (yes, you have to use the CLI—this functionality isn’t exposed in the OpenStack Dashboard web interface) looks like this:

neutron net-create <network name> -- \
--router:external=True --provider:network_type l3_ext \
--provider:segmentation_id <VLAN ID> \
--provider:physical_network=<NSX gateway service UUID> --shared=True

For the most part, this command is pretty straightforward, but let’s break it down nevertheless:

  • The router:external=True tells Neutron this network can be used as the external (uplink) connection on a logical router.
  • The provider:network_type l3_ext is an NSX-specific extension that enables Neutron to work with the layer 3 (routing) functionality of the NSX gateway appliances.
  • The provider:segmentation_id portion provides the VLAN ID that should be associated with this particular external network. This VLAN ID should be one of the VLAN IDs that is trunked across the connection from the physical switch to the NSX gateway appliance.
  • The provider:physical_network portion tells OpenStack which specific NSX gateway service to use. This is important to note: this command references an NSX gateway service, not an NSX gateway appliance. Refer to part 15 if you’re unclear on the difference.

You’d repeat this command for each external network (VLAN) you want connected to NSX and usable inside OpenStack.

For each Neutron network, you’ll also need a Neutron subnet. The command to create a subnet on one of these external networks looks like this:

neutron subnet-create <network name> <CIDR> \
--name <subnet name> --enable_dhcp=False \
--allocation-pool start=<starting IP address>,end=<ending IP address>

The range of IP addresses specified in the --allocation-pool portion of the command becomes the range of addresses from this particular subnet that can be assigned as floating IPs. It is also the pool of addresses from which logical routers will pull an address when they are connected to this particular external network.
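
Putting those two commands together, creating a hypothetical external network for VLAN 100 might look something like this (the network name, gateway service UUID, and addresses are all illustrative):

neutron net-create ext-vlan100 -- \
--router:external=True --provider:network_type l3_ext \
--provider:segmentation_id 100 \
--provider:physical_network=11111111-2222-3333-4444-555555555555 --shared=True

neutron subnet-create ext-vlan100 192.168.254.0/24 \
--name ext-vlan100-subnet --enable_dhcp=False \
--allocation-pool start=192.168.254.50,end=192.168.254.199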

When you’re done creating an external network and subnet for each VLAN on the outside of the NSX gateway appliance, then your users (consumers) can simply create logical routers as usual, and then select from one of the external networks as an uplink for their logical routers. This assumes you included the shared=True portion of the command when creating the network; if desired, you can omit that and instead specify a tenant ID, which would assign the external network to a specific tenant only.

I hope you find this post to be useful. If you have any questions, corrections, or clarifications, please speak up in the comments. All courteous comments are welcome!


Welcome to Technology Short Take #45. As usual, I’ve gathered a collection of links to various articles pertaining to data center-related technologies for your enjoyment. Here’s hoping you find something useful!

Networking

  • Cormac Hogan has a list of a few useful NSX troubleshooting tips.
  • If you’re not really a networking pro and need a “gentle” introduction to VXLAN, this post might be a good place to start.
  • Also along those lines—perhaps you’re a VMware administrator who wants to branch into networking with NSX, or you’re a networking guru who needs to learn more about how this NSX stuff works. vBrownBag has been running a VCP-NV series covering various objectives from the VCP-NV exam. Check them out—objective 1, objective 2, objective 3, and objective 4 have been posted so far.

Servers/Hardware

  • I’m going to go out on a limb and make a prediction: In a few years’ time (let’s say 3–5 years), Intel SGX (Software Guard Extensions) will be regarded as being as important as, if not more important than, the virtualization extensions. What is Intel SGX, you ask? See here, here, and here for a breakdown of the SGX design objectives. Let’s be real—the ability for an application to protect itself (and its data) from rogue software (including a compromised or untrusted operating system) is huge.

Security

  • CloudFlare (disclaimer: I am a CloudFlare customer) recently announced Keyless SSL, a technique for allowing organizations to take advantage of SSL offloading without relinquishing control of private keys. CloudFlare followed that announcement with a nitty-gritty technical details post that describes how it works. I’d recommend reading the technical post just to get a good education on how encryption and TLS work, even if you’re not a CloudFlare customer.

Cloud Computing/Cloud Management

  • William Lam spent some time working with some “new age” container cluster management tools (specifically, govmomi, govc CLI, and Kubernetes on vSphere) and documented his experience here and here. Excellent stuff!
  • YAKA (Yet Another Kubernetes Article), this time looking at Kubernetes on CoreOS on OpenStack. (How’s that for buzzword bingo?)
  • This analytical evaluation of Kubernetes might be helpful as well.
  • This project looks interesting; I got a chance to see it live at the recent DigitalOcean-CoreOS meetup in San Francisco. Here’s the announcement post.

Operating Systems/Applications

  • Trying to wrap your head around the concept of “microservices”? Here’s a write-up that attempts to provide an introduction to microservices. An earlier blog post on cloud native software is pretty good, too.
  • Here’s a very nice collection of links about Docker, ranging from how to use Docker to how to use the Docker API and how to containerize your application (just to name a few topics).
  • Here’s a great pair of articles (part 1 and part 2) on microservices and Platform-as-a-Service (PaaS). This is really good stuff, especially if you are trying to expand your boundaries learning about cloud application design patterns.
  • This article by CenturyLink Labs—which has been doing some nice stuff around Docker and containers—talks about how to containerize your legacy applications.
  • Here’s a decent write-up on comparing LXC and Docker. There are also some decent LXC-specific articles on the site as well (see the sidebar).
  • Service registration (and discovery) in a micro-service architecture can be challenging. Jeff Lindsay is attempting to help address some of the challenges with Registrator; more information is available here.
  • Unlike a lot of Docker-related blog posts, this post by RightScale on combining VMs and containers for better cloud portability is a well-written piece. The pros and cons of using containers are discussed fairly, without hype.
  • Single-process containers or multi-process containers? This site presents a convincing argument for multi-process containers; have a look.
  • Tired of hearing about containers yet? Oh, come on, you know you love them! You love them so much you want to run them on your OS X laptop. Well…read this post for all the gory details.

Storage

  • The storage aspect of Docker isn’t typically discussed in a lot of detail, other than perhaps focusing on the need for persistent storage via Docker volumes. However, this article from Red Hat does a great job (in my opinion) of exploring storage options for Docker containers and how these options affect performance and scalability. Looks like OverlayFS is the clear winner; it will be great when OverlayFS is in the upstream kernel and supported by Docker. (Oh, and if you’re interested in more details on the default device mapper backend, see here.)
  • This is a nice write-up on Riverbed SteelFusion, aka “Granite.”

Virtualization

  • Azure Site Recovery (ASR) is similar to vCloud Air’s Disaster Recovery service, though obviously tailored toward Hyper-V and Windows Server (which is perfectly fine for organizations that are using Hyper-V and Windows Server). To help with the setup of ASR, the Azure team has a write-up on the networking infrastructure setup for Microsoft Azure as a DR site.
  • PowerCLI in the vSphere Web Client, eh? Interesting. See Alan Renouf’s post for full details.
  • PernixData recently released version 2.0 of FVP; Frank Denneman has all the details here.

That’s it for this time, but be sure to visit again for future episodes. Until then, feel free to start (or join in) a discussion in the comments below. All courteous comments are welcome!


This post will provide a quick introduction to a tool called Vagrant. Unless you’ve been hiding under a rock—or, more likely, been too busy doing real work in your data center to pay attention—you’ve probably heard of Vagrant. Maybe, like me, you had some ideas about what Vagrant is (or isn’t) and what it does (or doesn’t) do. Hopefully I can clear up some of the confusion in this post.

In its simplest form, Vagrant is an automation tool with a domain-specific language (DSL) that is used to automate the creation of VMs and VM environments. The idea is that a user can create a set of instructions, using Vagrant’s DSL, that will set up one or more VMs and possibly configure those VMs. Every time the user uses the precreated set of instructions, the end result will look exactly the same. This can be beneficial for a number of use cases, including developers who want a consistent development environment or folks wanting to share a demo environment with other users.

Vagrant makes this work by using a number of different components:

  • Providers: These are the “back end” of Vagrant. Vagrant itself doesn’t provide any virtualization functionality; it relies on other products to do the heavy lifting. Providers are how Vagrant interacts with the products that will do the actual virtualization work. A provider could be VirtualBox (included by default with Vagrant), VMware Fusion, Hyper-V, vCloud Air, or AWS, just to name a few.
  • Boxes: At the heart of Vagrant are boxes. Boxes are the predefined images that are used by Vagrant to build the environment according to the instructions provided by the user. A box may be a plain OS installation, or it may be an OS installation plus one or more applications installed. Boxes may support only a single provider or may support multiple providers (for example, a box might only work with VirtualBox, or it might support VirtualBox and VMware Fusion). It’s important to note that multi-provider support by a box is really handled by multiple versions of a box (i.e., a version supporting VirtualBox, a version supporting AWS, or a version supporting VMware Fusion). A single box supports a single provider.
  • Vagrantfile: The Vagrantfile contains the instructions from the user, expressed in Vagrant’s DSL, on what the environment should look like—how many VMs, what type of VM, the provider, how they are connected, etc. Vagrantfiles are so named because the actual filename is Vagrantfile. The Vagrant DSL (and therefore Vagrantfiles) are based on Ruby.

One of the first things I thought about as I started digging into Vagrant was that Vagrant would be a tool that would help streamline moving applications/code from development to production. After all, if you had providers for Vagrant that supported both VirtualBox and VMware vCenter, and you had boxes that supported both providers, then you could write a single Vagrantfile that would instantiate the same environment in development and in production. Cool, right? In theory this is possible, but in talking with some others who are much more familiar with Vagrant than I am, it seems that in practice this is not necessarily the case. Because support for multiple providers is handled by different versions of a box (as outlined above), the boxes may be slightly different and therefore may not produce the exact same results from a single Vagrantfile. It is possible to write the Vagrantfile in such a way as to recognize different providers and react differently, but this obviously adds complexity.

With that in mind, it seems to me that the most beneficial uses of Vagrant are therefore to speed up the creation of development environments, to enable version control of development environments (via version control of the Vagrantfile), to provide some reasonable level of consistency across multiple developers, and to make it easier to share development environments. (If my conclusions are incorrect, please speak up in the comments and explain why.)

OK, enough of the high-level theory. Let’s take a look at a very simple example of a Vagrantfile:

(Click here if you can’t see the code block above.)

This Vagrantfile sets the box (“ubuntu/precise64”), the box URL (retrieved from Canonical’s repository of cloud images), and then sets the “/vagrant” directory in the VM to be shared/synced with the current (“.”) directory on the host—in this case, the current directory is the directory where the Vagrantfile itself is stored.
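
For reference, a minimal Vagrantfile matching that description might look something like this (the box URL shown is illustrative of Canonical’s cloud image repository; check the current path before relying on it):

# -*- mode: ruby -*-
# vi: set ft=ruby :

Vagrant.configure('2') do |config|
  # Use the 64-bit Ubuntu 12.04 ("Precise") box
  config.vm.box = 'ubuntu/precise64'
  # Where to fetch the box if it isn't already present locally
  config.vm.box_url = 'https://cloud-images.ubuntu.com/vagrant/precise/current/precise-server-cloudimg-amd64-vagrant-disk1.box'
  # Share the directory containing the Vagrantfile into /vagrant inside the VM
  config.vm.synced_folder '.', '/vagrant'
end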

To have Vagrant then use this set of instructions, run this command from the directory where the Vagrantfile is sitting:

vagrant up

You’ll see a series of things happen; along the way you’ll see a note that the machine is booted and ready, and that shared folders are getting mounted. (If you are using VirtualBox and the box I’m using, you’ll also see a warning about the VirtualBox Guest Additions version not matching the version of VirtualBox.) When it’s all finished, you’ll be deposited back at your prompt. From there, you can easily log in to the newly-created VM using nothing more than vagrant ssh. That’s pretty handy.

Other Vagrant commands include:

  • vagrant halt to shut down the VM(s)
  • vagrant suspend to suspend the VM(s), use vagrant resume to resume
  • vagrant status to display the status of the VM(s)
  • vagrant destroy to destroy (delete) the VM(s)

Clearly, the example I showed you here is extremely simple. For an example of a more complicated Vagrantfile, check out this example from Cody Bunch, which sets up a set of VMs for running OpenStack. Cody and his co-author Kevin Jackson also use Vagrant extensively in their OpenStack Cloud Computing Cookbook, 2nd Edition, which makes it easy for readers to follow along.

I said this would be a quick introduction to Vagrant, so I’ll stop here for now. Feel free to post any questions in the comments, and I’ll do my best to answer them. Likewise, if there are any errors in the post, please let me know in the comments. All courteous comments are welcome!


This is a live blog of session DATS004, titled “Bare-Metal, Docker Containers, and Virtualization: The Growing Choices for Cloud Applications.” The speaker is Nicholas Weaver (yes, that Nick Weaver, who now works at Intel).

Weaver starts his presentation by talking about “how we got here”, discussing the various technological shifts that have affected the computing landscape over the years. Weaver includes a discussion of the drivers behind virtualization as well as the pros and cons of virtualization.

That, naturally, leads to a discussion of containers. Containers are not all that new—Solaris Zones is a form of containers that existed back in 2004. Naturally, the recent hype associated with Docker has, according to Weaver, rejuvenated interest in the concept of containers.

Before Weaver gets too far into containers, he first provides a background of some of the core containerization pieces. This includes cgroups (the ability to control resource allocation/utilization), which is built into the Linux kernel. Namespace isolation is also important, which provides full process isolation (so that one process can’t see processes in another namespace). Namespace isolation isn’t just for processes; there’s also isolation for network entities, mounts, and users. LXC is a set of user-space tools that attempted to make using these constructs easier, but it hasn’t (until recently) been easy to really leverage these constructs.

Weaver next takes this relatively abstract discussion and makes it a bit more concrete with a specific example of how a microservice architecture would look under virtualization (OS instance, microservice libraries, and microservice itself) as well as under containers (shared OS instance and shared libraries plus microservice itself). Weaver talks about the “instant start” attribute of a container, but puts that in the context of the lifetime of the workload that’s running in the container. Start-up times don’t really matter for long-lived workloads, but for temporary, ephemeral workloads start-up times do matter. The pattern of “container on VM” is also mentioned by Weaver as another design pattern that some people use.

Next Weaver provides a quick list of pros and cons of containers:

  • Pros: faster lifecycle vs. virtual machines; visibility into what is running within the OS; ideal for homogeneous application stacks on Linux; almost non-existent overhead
  • Cons: very complex to configure (by itself, absent some sort of orchestration system or operating at scale); currently much weaker security isolation than VMs; applications must run on Linux (because Windows doesn’t have the same container technologies)

Next, Weaver transitions the discussion to focus on Docker specifically. Weaver describes Docker as “an easy button for containers,” making the underlying containerization constructs (cgroups, namespaces, etc.) easier to use. Docker is simpler and easier than LXC (where multiple binaries were involved). Weaver believes that Docker images—which he describes as an ordered set of actions to build a container—are the real game-changer. Weaver’s discussion of Docker images leads to a review of a Dockerfile, which is a DSL (domain specific language) for creating Docker images. Docker images are built on a series of layers; underlying layers could be “just” OS images (like Ubuntu or CentOS), but they could also be customized builds that contain applications and/or data.
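
As a purely illustrative example (not one taken from the session), a trivial Dockerfile showing the layered approach might look like this; the base OS image is the bottom layer, and each subsequent instruction adds another layer on top:

# Base OS image layer (Ubuntu, in this hypothetical example)
FROM ubuntu:14.04
# Layer that installs the application
RUN apt-get update && apt-get install -y nginx
# Layer that adds application content/data
COPY index.html /usr/share/nginx/html/
# What runs when a container is started from this image
CMD ["nginx", "-g", "daemon off;"]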

Image registries are how users can create images and share images with other users. The public Docker Hub is an example of an image registry.

The discussion now transitions into a quick review of the underlying Docker architecture. There is a Docker daemon that runs on Linux; the Docker client can be run elsewhere. The Docker client communicates with the Docker daemon (although you should note that in many cases the daemon listens on a local socket, which means using a Docker client remotely over the network won’t work).

The innovations that Weaver attributes to Docker include: images (like templates for VMs, and the use of copy-on-write makes them behave like code); API and CLI tools for managing container deployments; reduced complexity around deploying and managing containers; and support for namespaces and resource limits.

Weaver provides a more concrete example of how Docker can change a developer’s process for creating code. Here Weaver’s DevOps background really starts to show, as he discusses how Docker and containers would help streamline CI/CD operations.

Next up are the gotchas with containers. Trust is one gotcha; can we trust that one container won’t affect other containers? The answer, according to Weaver, is “it depends.” You still need to follow current recommended practices, such as no root access, host-level patches, auditing, and being aware of the default settings (which might be dangerous, if you aren’t aware). One way to address some of these concerns is to use VMs to provide strong security isolation between containers that need a stronger level of isolation than the standard container mechanisms can provide.

Intel, of course, is working on making containers better:

  • Security (Intel AES-NI, Intel TXT/TPM, Intel SGX)
  • Performance/flexibility (Intel VT-x/VT-d/VT-c)

Weaver wraps up the session with a quick summary of the key points from the session and some Q&A.


Welcome to Technology Short Take #44, the latest in my irregularly-published series of articles, links, ideas, and thoughts about various data center-related technologies. Enjoy!

Networking

  • One of the original problems with the VXLAN IETF specification was that it (deliberately) didn’t include any control plane information; as a result, the process of mapping MAC addresses to VTEPs (VXLAN Tunnel Endpoints) wasn’t defined, and the early implementations relied on multicast to handle this issue. To help resolve this issue, Cumulus Networks (and possibly Metacloud, I’m not sure of their involvement yet) has released an open source project called vxfld. As described in this Metacloud blog post, vxfld is designed to “handle VXLAN traffic from any operating system or hardware platform that adheres to the IETF Internet-Draft for VXLAN”.
  • Nir Yechiel recently posted part 1 of a discussion on the need for network overlays. This first post is more of a discussion of why VLANs and VLAN-based derivatives aren’t sufficient, and why we should be looking to routing (layer 3) constructs instead. I’m looking forward to part 2 of the series.
  • One ongoing discussion in the network industry these days—or so it seems—is the discussion about the interaction between network overlays and the underlying transport network. Some argue that tight integration is required; others point to streaming video services and VoIP running across the Internet and insist that no integration or interaction is needed. In this post, Scott Jensen argues in favor of the former—that SDN solutions shouldn’t just manage network overlays, but should also manage the configuration of the physical transport network as well. I’d love to hear from more networking pros (please disclose company affiliations) about their thoughts on this matter.
  • I like the distinction made here between network automation and SDN.
  • Need to get a better grasp on OpenFlow? Check out OpenFlow basics and OpenFlow deep-dive.
  • Here’s a write-up on connecting Docker containers using VXLAN. I think there’s a great deal of promise for OVS in containerized environments, but what’s needed is better/tighter integration between OVS and container solutions like Docker.

Servers/Hardware

  • Is Intel having second thoughts about software-defined infrastructure? That’s the core question in this blog post, which explores the future of Intel in a software-defined world and the increasing interest in non-x86 platforms like ARM.
  • On the flip side, proponents who claim that platforms like ARM and others are necessary in order to move forward with SDN and NFV initiatives should probably read this article on 80 Gbps performance from an off-the-shelf x86 server. Impressive.

Security

  • It’s nice to see that work on OpenStack Barbican is progressing nicely; see this article for a quick overview of the project and an update on the status.

Cloud Computing/Cloud Management

  • SDN Central has a nice write-up on the need for open efforts in the policy space, which includes the Congress project.
  • The use of public cloud offerings as disaster recovery targets is on the rise; note this article from Microsoft on how to migrate on-premises workloads to Azure using Azure Site Recovery. VMware has a similar offering via the VMware vCloud Hybrid Service recovery-as-a-service offering.
  • The folks at eNovance have a write-up on multi-tenant Docker with OpenStack Heat. It’s an interesting write-up, but not for the faint of heart—to make their example work, you’ll need the latest builds of Heat and the Docker plugin (it doesn’t work with the stable branch of Heat).
  • Preston Bannister took a look at cloud application backup in OpenStack. His observations are, I think, rational and fair, and I’m glad to see someone paying attention to this topic (which, thus far, I think has been somewhat ignored).
  • Interested in Docker and Kubernetes on Azure? See here and here for more details.
  • This article takes a look at Heat-Translator, an effort designed to provide some interoperability between TOSCA and OpenStack HOT documents for application deployment and orchestration. The portability of orchestration resources is one of several aspects you’ll want to examine as you progress down the route of fully embracing a cloud computing operational model.

Operating Systems/Applications

  • Looks like we have another convert to Markdown—Anthony Burke recently talked about how he uses Markdown. Regular readers of this site know that I do almost all of my content generation using MultiMarkdown (a variation of Markdown with some expanded syntax options). Here’s a post I recently published on some useful Markdown tools for OS X.
  • Good to see that Ivan Pepelnjak thinks infrastructure as code makes sense. I guess that means the time I’ve spent with Puppet (you can browse Puppet-related posts here) wasn’t a waste.
  • I don’t know if I’ve mentioned this before (sorry if that’s the case), but I’m liking this “NIX4NetEng” series going on over at Nick Buraglio’s site (part 1, part 2, and part 3).
  • Mike Foley has a blog post on how to go from zero to Windows domain controller in only 4 reboots. Handy.

Virtualization

  • Running Hyper-V with Linux VMs? Ben Armstrong details what versions of Linux support the various Hyper-V features in this post.
  • Here’s a quick write-up on running VMs with VirtualBox 4.3 on a headless Ubuntu 14.04 LTS server.
  • Nested OS X guest on top of nested ESXi on top of VMware Fusion? Must be something William Lam’s tried. Go have a look at his write-up.
  • Here’s a quick update on Nova-Docker, the effort in OpenStack to allow users to deploy Docker containers via Nova. I’m not yet convinced that treating Docker as a hypervisor in Nova is the right path, but we’ll see how things develop.
  • This post is a nice write-up on the different ways to connect a Docker container to a local network.
  • Weren’t able to attend VMworld US in San Francisco last week? No worries. If you have access to the recorded VMworld sessions, check out Jason Boche’s list of the top 10 sessions for a priority list of what recordings to check out. Or need a recap of the week? See here (one of many recap posts, I’m sure).

That’s it this time around; hopefully I was able to include something useful for you. As always, all courteous comments are welcome, so feel free to speak up in the comments. In particular, if there is a technology area that I’m not covering (or not covering well), please let me know—and suggestions for more content sources are certainly welcome!


This is a semi-liveblog of the day 2 keynote at VMworld 2014 in San Francisco.

Ben Fathi, CTO of VMware, takes the stage to start the keynote, and goes through a recap of yesterday’s announcements: EVO and EVO:RAIL, vCloud Air rebranding and the availability of new services, VMware Integrated OpenStack (VIO), and new product versions (vCloud Suite 5.8, betas of vSphere and VSAN).

Fathi re-emphasizes that VMware’s SDDC vision is an architecture, one built on the “power of AND,” and that vCloud Suite (and vCloud Air) is a manifestation of that vision/architecture.

Fathi transfers the baton to Sanjay Poonen, EVP for End-User Computing at VMware. Poonen is a high-energy speaker, and his enthusiasm for end-user computing is clear (and contagious). The world of end-user computing is changing; not only are users changing, but so are apps and data. Poonen talks about VMware’s EUC vision: enabling users to work “at the speed of life.” Today’s environments are far more heterogeneous than in the past. Poonen uses an example of a rooftop infinity pool in Singapore as a metaphor for the “three foundations” of EUC: SDDC, workspace services (authentication and content collaboration, for example), and then the desktop/mobile/cloud experience.

Poonen goes on to talk about VMware’s desktop leadership. Horizon delivers a unified application publishing and VDI solution. Next is Desktop-as-a-Service (DaaS), via Desktone, and Poonen mentions DaaS via vCloud Air. Next Poonen talks about real-time application delivery via the CloudVolumes acquisition. Finally, Poonen discusses the rich user experience, and he announces a partnership between Google, NVIDIA, and VMware, and rolls into a video about the results of the partnership. The partnership enables users of VMware Horizon View on VMware vSphere on next-generation Google Chromebooks to utilize massively graphics-intensive applications with outstanding performance. (This is a pretty cool announcement, IMHO.)

The second major pillar that Poonen discusses is enterprise mobility management. It’s not just about application management, but also about content management, email management, and device management. Poonen brings onto stage Kevin Ichhpurani from SAP, to discuss a partnership between VMware and SAP. The partnership brings together AirWatch and SAP Secure Mobile for the secure delivery of applications to mobile devices.

The third major pillar that Poonen discusses is content collaboration. This means “anytime/anywhere” access, hybrid deployment, enterprise-grade functionality/support. Poonen talks about how United Airlines is transforming its operations using mobile devices (supported by Apple devices and AirWatch by VMware). Poonen talks about all the integrations that are present: integrations between SDDC and workspace services, integration between desktop services and content collaboration. Poonen revisits the VMware Workspace Suite announcement from yesterday, drawing a comparison between vCloud Suite for SDDC and the Workspace Suite for EUC.

Poonen mentions that VMware has been designated the leader by both GigaOm and Radicati Group.

At this point, Poonen brings out Kit Colbert, CTO for EUC at VMware. Colbert is well-known to the VMworld crowd and is a veteran at VMware. Colbert frames his discussion around the idea of a “mobile cloud architecture.” He starts the first demo of the day, built around a doctor moving through various devices and applications over the course of a day. The demo spans a Mac in the doctor’s office, an iPad that he carries with him, and a thin client leveraging rich 3D graphics functionality. AirWatch Secure Content Locker facilitates some fictional collaboration between Colbert and Poonen (who is also a doctor in this example) across different security zones. The summary of the demo is that all the applications were in one place, data followed the user, and 3D images were available on any device.

Next Colbert takes the audience into a demo of CloudVolumes, and how CloudVolumes makes application delivery fast and seamless (like provisioning applications via AirWatch to a mobile device). In the demo, Colbert provisions Office to a VDI desktop almost instantaneously.

Application delivery is important, but desktop delivery and desktop provisioning are also important. Colbert next dives into more detail on Project Fargo, which enables vastly faster provisioning of virtual desktops via a cloning process. The clone is very lightweight because it shares not only disk but also memory with the original. Colbert begins to tie things together: VMware uses Project Fargo to quickly clone the desktop, and then uses CloudVolumes to attach applications to that desktop. When the desktop is no longer needed, it is discarded. This radically simplifies the administration and management of desktops and applications. Colbert name-drops a new project name—Project Meteor—but doesn’t go into much detail on exactly what Meteor encompasses.

Poonen and Colbert turn the stage over to Raghu Raghuram, who heads up VMware’s SDDC efforts. Raghuram reviews the achievements of the core components of the SDDC architecture—compute (via vSphere, including the vSphere 6.0 beta announced yesterday), storage (via VSAN), networking (via NSX—go VCDX-NV!), and management (via the newly-rebranded vRealize suite). Once again, Raghuram talks about SDDC as “the power of AND”.

Raghuram next transitions into a review of the “three paths” to SDDC that were introduced by Gelsinger yesterday: build your own, converged infrastructure, and hyper-converged infrastructure. Naturally, this leads to a discussion of the EVO family, especially EVO:RAIL. Fathi takes the stage again and prompts Raghuram into discussing how quickly EVO:RAIL can be turned up (up and running in 15 minutes, scales up to 100 VMs or 250 desktop VMs, supports non-disruptive upgrades), and that leads into a demo of EVO:RAIL. The hardware specification for EVO:RAIL is a 2U, four-node enclosure (think Dell C6100 or similar hardware form factor). EVO:RAIL building blocks will automatically discover each other, allowing users to easily build 16 node clusters (four EVO:RAIL blocks). EVO:RAIL also offers a greatly simplified web-based UI, although the vSphere Web Client is still available for use. EVO:RAIL includes vSphere and VSAN, and includes Log Insight as well.

EVO:RACK is a larger-scale implementation that builds on EVO:RAIL and includes all of the vCloud Suite. It starts at half a rack and scales to multiple racks (no mention I caught of exactly how many racks). VMware is shooting for having an EVO:RACK environment up and running, and deploying applications, in less than 2 hours. EVO:RACK, of course, is still in technical preview.

Next Raghuram moves onto talking about VMware Integrated OpenStack (VIO), which was announced yesterday as being available in beta. Over the next few minutes he discusses VMware’s involvement in OpenStack, and reviews why running OpenStack on VMware is the best choice for deploying OpenStack today. It doesn’t hurt that VMware offers a single point of contact for deploying OpenStack on VMware using VIO, although VMware continues to partner with companies like Canonical, HP, SuSE, and Mirantis for OpenStack deployments as well. (HP announced yesterday that HP Helion will support vSphere and NSX.)

From there, Raghuram transitions to a discussion of traditional applications and next-generation applications. This leads to mention of support for 4 vCPUs with VMware Fault Tolerance in the vSphere 6 beta. (This expansion of support is a long time coming.) Cross-vCenter vMotion is also coming in vSphere 6, as well as long-distance vMotion (coast-to-coast vMotion mentioned as an example). Raghuram transitions to Fathi for a discussion of next-generation applications.

Fathi talks about the use of microservices as the basis for next-generation applications, which clearly indicates he will be talking about containers. Fathi paints containers as “having been around for years,” including pointing to a feature in Windows Data Center Server 15 years ago. However, Docker has clearly captured the attention of developers, and so Fathi talks about where Docker fits. This leads into the “containers without compromise” message, which means that running containers on VMs on VMware’s SDDC architecture is the best way to deliver containerized applications and microservices today. Fathi mentions VMware working closely with Docker and Kubernetes, and explains the concept of a Kubernetes pod as a collection of containers. The value that VMware offers in a Docker/Kubernetes environment includes persistent storage via VSAN, pod mobility via vMotion/DRS, enhanced security via the VM operational model, and the rich ecosystem of tools supporting VMware virtualized infrastructure. Fathi re-iterates that SDDC is a single platform for running traditional applications as well as next-generation applications, and reminds attendees that CloudFoundry has supported containers (via Warden) for a number of years. (In other words, containers are not new to VMware.) VMware announces today that VMware is working with Docker, Google, and Pivotal to make containers a “first-class” citizen in the VMware SDDC architecture.

This leads into a demo by Fathi showing the use of vCAC (vRealize Automation?) deploying a blueprint. The demo shows CoreOS and Kubernetes deploying Vulcan, which is a service registration/service discovery tool. Raghuram asks about Project Fargo, and Fathi talks about how Fargo can help make containers even better than on bare metal through the use of technologies like those in Project Fargo.

Next up, Raghuram talks about the vRealize suite of management applications and tools. vRealize will be available as an on-premises implementation as well as available via vCloud Air in a SaaS model. I think that the value of this second approach is possibly being overlooked by a good number of conference attendees, but it is quite a significant announcement in my humble opinion.

Fathi takes over again to discuss the need for policy in the SDDC architecture. Policy can (and should) be applied not only to networking, but also to storage, compute, and other areas. This leads into a demo, hosted within vCAC (vRealize Automation?), that shows how various policies, like storage policies, are implemented to affect how applications are deployed within an SDDC architecture. (There is, of course, another side of policy that isn’t being mentioned here, and that’s what Congress attempts to address. But that’s another story for another day.) There is a nice demo of integrations between vCAC and NSX, where vCAC creates NSX security groups (with associated firewall rules) on-demand as part of a blueprint. That’s nice, and will be quite useful, I think.

Speaking of NSX, Fathi jumps into a discussion of microsegmentation as a key value-add for VMware NSX in data centers. While microsegmentation was possible before VMware NSX, the value that NSX offers is that microsegmentation now becomes more manageable and more operationally feasible than it’s been in the past. Fathi reminds the audience that this functionality can be driven by policy.

Raghuram summarizes the various demonstrations and announcements, and leaves the stage. Fathi then invites Simone Brunozzi, VP and Chief Technologist for Hybrid Cloud, to the stage. Brunozzi uses his Google Glass as an excuse to show off a mobile application he uses to “check” on his data center. This leads into a demo that uses vCenter Operations Manager to show the health of vCloud Air resources as well as VMware NSX resources—this illustrates the expanding reach of vCenter Operations Manager and the tight integration between the various product lines that VMware is advancing. From there, the demo moves on to extending a network from your own data center into vCloud Air, and then migrating workloads (via a copy operation) to vCloud Air directly from within the vSphere Web Client. That’s nothing new, according to Fathi, but what is new is the coordination between firewall rules in NSX and firewall rules in vCloud Air to enable full connectivity between on-premises workloads and workloads in vCloud Air.

Brunozzi next demos using vSphere Replication to replicate to a cloud provider for cloud-based recovery. There was no mention I caught of which providers support this, though I’m sure that vCloud Air, via its recovery-as-a-service offering, does.

At this point I needed to leave to get ready for a session, so I wasn’t able to capture the rest of the keynote. I hope that what I was able to capture was useful—feel free to share any corrections or additions in the comments below. Thanks!

