OpenStack

Welcome to Technology Short Take #46. That’s right, it’s time for yet another collection of links and articles from around the Internet on various data center-related technologies, products, projects, and efforts. As always, there is no rhyme or reason to my collection; this is just a glimpse into what I’ve seen over the past few weeks. I hope you are able to glean something useful.

Networking

  • This post by Matt Oswalt—the first in a series, apparently—provides a great introduction to 5 development tools for network engineers. I’ve already increased my usage of Git in an effort to become more fluent with this very popular version control tool, and I was already planning on exploring Jinja2 as well (these are both mentioned in Matt’s article). This is a really useful post and I’m looking forward to future posts in this series.
  • Matt also recently posted part 4 (of 5) in his series on SDN protocols; this post covers OpFlex and declarative networking.
  • It was good to read this post on Cumulus Linux first impressions by Jeremy Stretch. I’m a fan of Cumulus, but I’m admittedly a Linux guy (see here) so you might say I’m a bit biased. Jeremy is a “hard-core” networking professional, and so hearing his feedback on Cumulus Linux was, in my opinion, useful. I like that Jeremy was completely honest: “I’m not going to lie: Cumulus Linux was not immediately appealing to me.” I highly encourage reading this article.
  • If you’re interested in more details on how NSX handles ARP suppression, Dmitri Kalintsev has a post just for you. Dmitri has some other great NSX-related content as well.

Servers/Hardware

  • While all the attention is “up the stack,” there are still some occasions when you need to worry about the details in the hardware. Kevin Houston’s recent article on selecting the right memory for your blade server is one such example.
  • “Junk-box infrastructure,” eh? Interesting thought. There’s no doubt Greg Ferro had to be involved somehow in this discussion; this rings of the “post-scarcity” discussions he and I had at IDF 2014 in September.

Security

Nothing this time around, but I’ll keep an eye out for material to include next time. You’re also invited to share relevant links or articles in the comments!

Cloud Computing/Cloud Management

  • Jay Pipes has an excellent and well-written post on the core of OpenStack. I really appreciate Jay’s focus on what’s beneficial to the users of OpenStack: the cloud operators, the end users/consumers, and the developers building applications on top of OpenStack.
  • Andy Bruce has an article on adding external networks to Neutron with GRE.
  • Previous Technology Short Takes have mentioned CoreOS and Kubernetes in the same sentence, but as far as I can tell I haven’t pointed readers to this two-part series on running Kubernetes directly on CoreOS. See part 1 and part 2 for the full details. (Nice use of VMware Fusion in part 2, by the way.)
  • Here’s an article on using Terraform in conjunction with Docker and Digital Ocean. Terraform looks interesting, but I wish it would add an OpenStack provider.
  • Here’s a walkthrough to running containers on vCloud Air.
  • VMware, Vagrant, and Docker together is the subject of this blog post by Fabio Rapposelli. This is useful information if you are looking to combine these technologies in a useful way.
  • As can be expected given the very recent release of Juno, the fall 2014 release of OpenStack, a number of Juno-specific “how to install” pages are popping up. Here’s one such example. Much of the content is similar to previous “how to install” guides that I’ve seen, but it might be useful to a few folks out there.

Operating Systems/Applications

  • I’m seriously considering using the information in this article on an IRC proxy for myself. I find it enormously helpful to stay connected to various open source-related IRC channels, but staying logged in while on the move is, for all intents and purposes, impossible. Perhaps the use of an IRC proxy can help. Anyone else out there using a setup like this?
  • Did you happen to notice that CoreOS is now available on Microsoft Azure? It makes me wonder when VMware will announce support for CoreOS on vCloud Air.

Storage

  • I’m not sure if this falls into storage or virtualization, but we’ll place it here in the Storage section. Eric Sloof (in conjunction with VMworld TV) has a video introducing readers to CloudVolumes, the relatively recent VMware acquisition that’s being put to work in the end-user computing space.
  • Greg Schulz has a decent two-part (well, three-part actually, but it’s only the last two that interest me) series on VVols and storage I/O fundamentals (part 1 and part 2).

Virtualization

  • It will be nice when the virtualization industry converges on some common set of disk formats for virtual machines. OVF/OVA was an attempt, but there’s still some work to be done on that front. Until then, VM converters like version 3.0 of the Microsoft Virtual Machine Converter will keep popping up.
  • William Lam has a great guest post from Peter Bjork on a Mac Mini setup running VSAN.
  • A short while ago I gave you a quick introduction to Vagrant. One of the key components of Vagrant is the box, which is essentially a VM template. Cody Bunch recently published a post on using Packer to make Vagrant boxes, which might come in handy if you want to create your own Vagrant boxes.
  • Here’s a quick reminder from Ben Armstrong that if you want to use the Microsoft Virtual Machine Converter 3.0, you’ll need to be sure to unblock WMI in the Windows Firewall.
  • Want to install ESXi 5.5 Patch03 on the new Mac Pro? William Lam shows you how.
  • Installing CoreOS on vSphere got a bit easier, thanks to the inclusion of Open VM Tools in CoreOS Alpha 490.0.0 and a new script by William Lam. Here are the details. (I guess William has been doing some super-useful stuff, since I keep referencing his links here. Keep up the great work, William!)
  • Gabrie van Zanten brings up an incompatibility between VMware Auto Deploy (in vSphere 5.1) and Microsoft Cluster Server (MSCS). What’s the problem, you say? The two won’t work together.

I have more articles in my bucket labeled “Articles to blog about,” but I’ll save those for some other time in the interest of keeping this from getting overly long (which it probably already is). Until next time!

For non-programmers, making a meaningful contribution to an open source project can be difficult; this is as true for OpenStack as for other open source projects. Documentation is a way to contribute, but in the case of OpenStack there is a non-trivial setup required in order to be able to contribute to the OpenStack documentation. In this post, I’m going to share how to set up the tools to contribute to OpenStack documentation in the hopes that it will help others get past the “barrier to entry” that currently exists.

I’ve long wanted to be more involved in supporting the OpenStack community, beyond my unofficial support via advocacy and blogging about OpenStack. I felt that documentation might be a way to achieve that goal. After all, I’ve written books and have been blogging for 9 years, so I should be able to add some value via documentation contributions. However, the toolchain that the OpenStack documentation uses requires a certain level of familiarity with development-focused tools, and the “how to” guides were less than ideal because of assumptions made regarding the knowledge level of new contributors. For these reasons, I felt that sharing how I (a non-programmer) set up the tools for contributing to OpenStack documentation might encourage other non-programmers to do the same, and thus get more people involved in the project.

This post is not intended to replace any official guides (like this one or this one), but rather to supplement such guides.

Using a Linux Environment

It’s no secret that I use OS X, and the toolchain that the OpenStack documentation team uses is—like the rest of OpenStack—pretty heavily biased toward Linux. One of the deterring factors for me was the level of difficulty to get these tools running on OS X. While it is possible to use the toolchain on OS X directly, it’s not necessarily simple or straightforward. It wasn’t until just recently that I realized: why not just use a Linux VM? Using a native Linux environment sidesteps all the messiness of trying to get a Linux-centric toolchain running on OS X (or Windows).

I initially started with a locally-hosted Linux VM on my MacBook Air, but after some consideration I decided to alter that approach and build my environment in a cloud VM running Ubuntu 12.04 on Amazon EC2 (via Ravello Systems).

The use of a cloud VM has some advantages and disadvantages:

  • Because the toolchain is heavily biased toward Linux, using a Linux-based cloud VM does simplify some things.
  • Cloud VMs will typically have much greater bandwidth, making it easier to install packages and pull down Git repositories. This was especially helpful this past week while I was traveling to the OpenStack Summit, where—due to slow conference wireless access—it took 2 hours to clone the GitHub repository for the OpenStack documentation (this was while I was trying to use a locally-hosted VM on my laptop).
  • It provides a clean separation between my local workstation and the development environment, meaning I can access my development environment from any system via SSH. This prevents me from having to keep multiple development environments in sync, which is something I would have had to do when working from my home office (where I use my MacBook Air as well as a Mac Pro).
  • The most significant drawback is that I cannot work on documentation patches when I have no network connectivity. For example, I’m writing this post as I’m traveling back from Paris. If I had a local development environment, I could have assigned a few bugs to myself and worked on them while flying home. With a cloud VM-based environment, I cannot. I could have used a VM hosted on my local laptop (via VirtualBox or VMware Fusion), but that would have negated some of the other advantages.

In the end, you’ll have to evaluate for yourself what approach works best for you, your working style, and your existing obligations. However, the use of a Linux VM—local or via a cloud provider—can simplify the process of setting up the toolchain and getting started contributing, and it’s an approach I’d recommend for most anyone.

The information presented here assumes you are using an Ubuntu-based VM, either hosted with a cloud provider or on your local system. I’ll leave it to the readers to worry about the details of how to turn up the Ubuntu VM via their preferred mechanism.

Setting Up the Toolchain

Once you’ve established an Ubuntu Linux instance using the provider of your choice, use these steps to set up the documentation toolchain (note that, depending on the configuration of your Ubuntu Linux instance, you might need to preface these commands with sudo where appropriate):

  1. Update your system using apt-get update and apt-get -y upgrade. Depending on the updates that are installed, you might need to reboot.

  2. Generate an SSH keypair using ssh-keygen. Keep track of this keypair (and make note of the associated password, if you assigned one), as you’ll need it later. You can also re-use an existing keypair, if you’d like; I preferred to generate a keypair specifically for this purpose. If you are going to re-use an existing keypair, you’ll need to transfer them to this Linux instance via scp or your preferred mechanism.

  3. Install the necessary prerequisite packages (I’ve line wrapped the command below, denoted by a backslash, for readability):

    apt-get install maven git git-review python-pip libxml2-dev \
    libxslt1-dev python-dev gcc gettext zlib1g-dev
  4. Configure Git using git config to set your name and e-mail address. The e-mail address, in particular, should match what you’re using with the OpenStack Foundation. The commands to do this are git config --global user.name and git config --global user.email.
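
For example (the name and e-mail address below are only placeholders; substitute your own values):

    git config --global user.name "Jane Doe"
    git config --global user.email "jane.doe@example.com"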

  5. Clone the GitHub repository where the documentation is found:

    git clone https://github.com/openstack/openstack-manuals.git
  6. Install the components required to run tests against the documentation sources (this command assumes that you cloned the repository into a subdirectory named “openstack-manuals”, which would be the default result of using the command above to clone the repository):

    pip install -r openstack-manuals/test-requirements.txt
  7. If you haven’t already established a Launchpad account, you’ll need one. Go create one (probably best done outside of the Ubuntu Linux environment) and make note of the username. Use your Launchpad account to verify that you can log in to both bugs.launchpad.net as well as review.openstack.org.

  8. While logged into review.openstack.org, go to the settings for your account and paste in the contents of the public key (not the private key) you generated earlier (or that you are re-using).

  9. Back in the Ubuntu Linux instance, set your Launchpad username:

    git config --global --add gitreview.username "<Launchpad username>"
  10. Change into the directory where you cloned the “openstack-manuals” repository and run git review -s. Assuming you already added the “gitreview.username” configuration parameter and that you’ve already uploaded the public key to your account on review.openstack.org, this should work without any issues. If you forgot to set “gitreview.username”, it should prompt you for your Launchpad username. If you forgot to upload the public SSH key, then it will probably error out.

  11. If git review -s completes without any obvious errors, you can double-check that a Gerrit upstream for the repository was added by using the command git remote -v. If you don’t see an upstream repo labeled gerrit, then something didn’t work. If you do see the upstream repo labeled gerrit, then you should be fine.

Once you’ve completed these steps, then you’re ready to start contributing to OpenStack documentation. I have a separate post planned that describes the process for actually contributing; stay tuned for that soon.

In the meantime, if anyone has any questions, corrections, or clarifications about the information in this post, please speak up in the comments below.

This is part 18 of the Learning NSX blog series, in which I talk about using layer 3 (L3) routing with VMware NSX but without network address translation (NAT). This post describes a configuration that offers yet another connectivity option for OpenStack cloud administrators and operators.

In part 6, I showed you how to add a gateway appliance to your NSX installation. Part 9 leveraged the gateway appliances to create a L3 gateway service, which—as I explained in part 15—provides the functionality for logical routers in OpenStack. (Logical routing was covered in part 14.) Part 16 expanded the routing configuration to support multiple external networks. This post expands the options again by showing you how to do logical routing without using network address translation (NAT). Of course, it would probably be helpful to read the entire series; links to all posts can be found on the Learning NVP/NSX page.

As I mentioned, so far you’ve seen three different external connectivity options:

  • Routing (layer 3 connectivity) to a single external network
  • Routing (layer 3 connectivity) to multiple external networks using VLANs
  • Bridging (layer 2 connectivity) between a logical network and a physical broadcast domain

Both of the routed connectivity options you’ve seen so far have involved the use of NAT (specifically, source NAT, or SNAT) on the logical router in order to establish outbound connectivity. You’ve had to use floating IP addresses to establish inbound connectivity to instances. However, NSX is absolutely capable of supporting routed connectivity options that do not involve NAT, allowing cloud administrators and operators to use fully routable IP address spaces inside NSX logical networks.

Assuming that you already have logical routing working, the steps to establish a no-NAT routing configuration are pretty straightforward. Note that because the IP address space used in the logical network must interoperate with the IP addressing scheme used on the public network, this configuration is typically set up by a cloud administrator and not by a tenant.

To set up a no-NAT configuration, follow these steps (all commands are assumed to be run using admin credentials):

  1. Store the tenant’s ID in an environment variable, as you’ll be needing it quite a bit over the next few minutes. You can do that using keystone tenant-list and awk. For example, if the tenant’s name was blue, then you could use keystone tenant-list | awk '/\ blue\ / {print $2}' to find the tenant ID and assign it to an environment variable. The following commands will assume you’ve stored the tenant’s ID as $TID.
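
For instance, here’s one way to capture the ID for a tenant named blue directly into that variable:

TID=$(keystone tenant-list | awk '/\ blue\ / {print $2}')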

  2. Create a logical router and assign it to the tenant. The command to do this is neutron router-create --tenant-id $TID <Name of router>. It might be possible to re-use an existing logical router, but you’re really much better off just creating another logical router for this purpose.

  3. Create a logical network on behalf of the tenant, using neutron net-create --tenant-id $TID <Name of network>.

  4. Create a subnet and associate it with the new logical network you just created. You can do that with this command:

neutron subnet-create --tenant-id $TID --name <Subnet name> --gateway <IP address for logical router> <Name of network> <CIDR>
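
For example, continuing with the hypothetical blue tenant from step 1 (the network name and addressing here are placeholders for your own values), the command might look like this:

neutron subnet-create --tenant-id $TID --name blue-subnet --gateway 192.168.100.1 blue-net 192.168.100.0/24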
  5. Attach the logical router to the logical network and associated subnet with neutron router-interface-add <Router ID> <Subnet name>. The IP address of this interface will be the IP address provided to the --gateway parameter when you created the subnet.

  6. Attach the logical router to the external network that will serve as its uplink. This is where you’ll tell Neutron and NSX that NAT should not be active. The command looks like this:

neutron router-gateway-set --disable-snat <Router ID> <External network ID>

You’re almost done—you now have a logical network using IP addresses from a subnet that is routable within your larger network without any use of NAT. However, you still need to let the rest of the network know about this new subnet, and because NSX in multi-hypervisor environments doesn’t yet support dynamic routing protocols like OSPF or BGP, you’ll need to use static routing.

To do static routing, you’ll need the IP address assigned to the router during step 6 above. neutron port-list will show you the list of ports in Neutron; depending on the size of your installation, this list might be quite sizable. This command might help:

neutron port-list -c id -c fixed_ips -c device_owner | awk '/network:router_gateway/'

This will show you only the ports that are acting as router gateway ports. In large environments, even this list may not be short enough to know for sure which IP address is assigned to the router. For now, though, let’s assume that you’re able to determine the IP address assigned to the logical router on the external network.

Once you have this address, then it’s just a simple matter of adding that route to other network devices. Often, this is some form of ip route add; the specific commands and syntax will depend on the network device and are outside the scope of this article. Once you get the IP routing table(s) updated on the other network devices, then you’re ready to roll!
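
For example, on a Linux-based device the static route might look something like this, where the destination is the logical network’s CIDR and the next hop is the address assigned to the logical router’s uplink (both values below are placeholders):

ip route add 192.168.100.0/24 via 10.1.1.50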

I hope you find this article (and this series) helpful. As always, if you have any questions, corrections, clarifications, or concerns, I invite you to speak up in the comments below. All courteous comments are welcome!

This is part 17 of the Learning NSX blog series. In this post, I’ll show you how to add layer 2 (L2) connectivity to your NSX environment, and how to leverage that L2 connectivity in an NSX-powered OpenStack implementation. This will allow you, as an operator of an NSX-powered OpenStack cloud, to offer L2/bridged connectivity to your tenants as an additional option.

As you might expect, this post does build on content from previous posts in the series. Links to all the posts in the series are available on the Learning NVP/NSX page; in particular, this post will leverage content from part 6. Additionally, I’ll be discussing using NSX in the context of OpenStack, so reviewing part 11 and part 12 might also be helpful.

There are 4 basic steps to adding L2 connectivity to your NSX-powered OpenStack environment:

  1. Add at least one NSX gateway appliance to your NSX implementation. (Ideally, you would add two NSX gateway appliances for redundancy.)
  2. Create an NSX L2 gateway service.
  3. Configure OpenStack for L2 connectivity by configuring Neutron to use the L2 gateway service you just created.
  4. Add L2 connectivity to a Neutron logical network by attaching to the L2 gateway service.

Let’s take a look at each of these steps. (By the way, if the concept of “L2 connectivity” doesn’t make sense to you, please review part 1 of my “Introduction to Networking” series.)

Adding an NSX Gateway Appliance

I described the process for adding an NSX gateway appliance in part 6 of the series, so refer back to that article for details on how to add an NSX gateway appliance. The process for adding a gateway appliance is the same regardless of whether you’ll use that gateway appliance for L2 (bridged) or L3 (routed) connectivity.

A few things to note:

  • Generally, your gateway appliance will have at least three (3) network interfaces. One of those interfaces will be used for management traffic, one for transport (overlay) traffic, and one for external traffic. You’ll need to assign IP addresses to the management and transport interfaces, but the external interface does not require an IP address.
  • If you are going to use the gateway appliance to provide L2 connectivity to multiple VLANs, you’ll want to ensure that all appropriate VLANs are trunked to the external interface of the gateway appliances. If you are deploying redundant gateway appliances, make sure all the VLANs are trunked to all appliances.

Once you have the gateway appliance built and added to NSX using the instructions in part 6, you’re ready to proceed to the next step.

Creating an NSX L2 Gateway Service

After your gateway appliances (I’ll assume you’re using two appliances for redundancy) are built and added to NSX, you’re ready to create the L2 gateway service that will provide the L2 connectivity in and out of a NSX-backed logical network. This process is similar to the process described in part 9 of the series, which showed you how to add an L3 gateway service to NSX. (If you’re unclear on the difference between a gateway appliance and a gateway service, check out part 15 for a more detailed explanation.)

Before we walk through creating an L2 gateway service, keep in mind that you may connect either an L3 gateway service or an L2 gateway service to a single broadcast domain on the physical network. Let’s say you connect an L3 gateway service to VLAN 100 (perhaps using multiple VLANs as described in part 16). You can’t also connect an L2 gateway service to VLAN 100 as well; you’d need to use a different VLAN on the outside of the L2 gateway service. Be sure to take this fact into account in your designs.

To create an L2 gateway service, follow these steps from within NSX Manager:

  1. From the menu across the top of the NSX Manager page, select Network Components > Services > Gateway Services. This will take you to a page titled “Network Components Query Results,” where NSX Manager has precreated and executed a query for the list of gateway services. Your list may or may not be empty, depending on whether you’ve created other gateway services. Any gateway services that you’ve already created will be listed here.

  2. Click the Add button. This will open the Create Gateway Service dialog.

  3. Select “L2 Gateway Service” from the list. Other options in this list include “L3 Gateway Service” (you saw this in part 9) and “VTEP L2 Gateway Service” (to integrate a third-party top-of-rack [ToR] switch into NSX; you’ll use this in a future post). Click Next, or click on the “2. Basics” button on the left.

  4. Provide a display name for the new L2 gateway service, then click Next (or click on “3. Transport Nodes” on the left). You can optionally add tags here as well, in case you wanted to associate additional metadata with this logical object in NSX.

  5. On the Transport Nodes screen, click Add Gateway to select a gateway appliance (which is classified as a transport node within NSX; hypervisors are also transport nodes) to host this L2 gateway service.

  6. From the Edit Gateway dialog box that pops up, you’ll need to select a transport node and a device ID. The first option, the transport node, is pretty straightforward; this is a gateway appliance on which to host this gateway service. The device ID is the bridge (recall that NSX gateway appliances, by default, create OVS bridges to map to their interfaces) connected to the external network.

  7. Once you’ve added two (2) gateway appliances as transport nodes for your gateway service, click Save to create the gateway service and return to NSX Manager. You can create a gateway service with only a single gateway appliance, but it won’t be redundant and protected against the failure of the gateway appliance.

NSX is now ready to provide L2 (bridged) connectivity between NSX-backed logical networks and external networks connected to the gateway appliances in the L2 gateway service. Before we can leverage this option inside OpenStack, though, we’ll need to first configure OpenStack to recognize and use this new L2 gateway service.

Configure OpenStack for L2 Connectivity

Configuring OpenStack for L2 connectivity using NSX builds upon the specific details presented in part 12 of this series. I highly recommend reviewing that post if you haven’t already read it.

To configure OpenStack to recognize the L2 gateway service you just created, you’ll need to edit the configuration file for the NSX plugin on the Neutron server. In earlier versions of the plugin, this file was called nvp.ini and was found in the /etc/neutron/plugins/nicira directory. (In fact, this is the information I shared with you in part 12.) Newer versions of the plugin, however, use a configuration file named nsx.ini located in the /etc/neutron/plugins/vmware directory. I’ll assume you are using a newer version of the plugin.

Only a single change is needed to nsx.ini in order to configure OpenStack to recognize/use the new L2 gateway service. Simply add the UUID of the L2 gateway service (easily obtained via NSX Manager) to the nsx.ini file as the value for the default_l2_gw_service_uuid setting. (You followed a similar procedure in part 12 as part of the OpenStack integration, but for L3 connectivity that time.) Then restart the Neutron server, and you should be ready to go!
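
For reference, the line you’d add to nsx.ini looks something like this (the UUID shown here is only a placeholder for the value you copy out of NSX Manager):

default_l2_gw_service_uuid = 3b4dc320-0000-0000-0000-000000000000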

Neutron recognizes L2 gateway services as network gateways, so all the related Neutron commands use the term net-gateway. You can verify that the L2 gateway service is recognized by OpenStack Neutron by running the following command with admin permissions:

neutron net-gateway-list

You should see a single entry in the list, with a description that reads something like “default L2 gateway service” or similar. As long as you see that entry, you’re ready to proceed! If you don’t see that entry, it’s time to check in NSX Manager and/or double-check your typing.

Adding L2 Connectivity to a Neutron Logical Network

With the NSX gateway appliances installed, the L2 gateway service created, and OpenStack Neutron configured appropriately, you’re now in a position to add L2 connectivity to a Neutron logical network. However, there are a few limitations that you’ll want to consider:

  • A given Neutron logical network may be connected to either a logical router (hosted on gateway appliances that are part of an L3 gateway service) or a network gateway (an L2 gateway service), but not both. In other words, you can provide L3 (routed) or L2 (bridged) connectivity into and out of logical networks, but not both simultaneously.
  • Each Neutron logical network may be associated with exactly one broadcast domain on the physical network. Similarly, each broadcast domain on the physical network may be associated with exactly one Neutron logical network. For example, you can’t associate VLAN 100 with both logical network A as well as logical network B.
  • Finally, by default network gateway operations are restricted to users with administrative credentials only. There is a model whereby tenants can have their own network gateways, but for the purposes of this article we’ll assume the default model of provider-supplied gateways.

With these considerations in mind, let’s walk through what’s required to add L2 connectivity to a Neutron logical network.

  1. If you don’t already have a logical network, create one using the neutron net-create command. This can be done with standard tenant credentials.

  2. If you had to create the logical network, create a subnet as well using the neutron subnet-create command. You can leave DHCP enabled on this Neutron subnet, as the Neutron DHCP server (which is an instance of dnsmasq running in a network namespace on a Neutron network node) won’t provide addresses to systems on the physical network. However, the logical network and the physical network are going to be sharing an IP address space, so it would probably be a good idea to control the range of addresses using the --allocation-pool parameter when creating the subnet. As with creating the network, standard tenant credentials are all that are needed here.
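
As a hedged example, if the shared IP address space were 192.168.50.0/24, you might restrict Neutron to a slice of that range like this (the network name and addresses are placeholders):

neutron subnet-create --name bridged-subnet --allocation-pool start=192.168.50.100,end=192.168.50.150 web-net 192.168.50.0/24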

  3. You’ll need to get the UUID of the network gateway, which you can do with this command: neutron net-gateway-list | awk '/\ default\ / {print $2}'. (You can also assign this to an environment variable for use later, if that helps you.) You’ll also need the UUID of the logical network, which you can also store into an environment variable. This command and all subsequent commands require administrative credentials.
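
For example, assuming a hypothetical logical network named web-net, you could capture both UUIDs like this:

GWID=$(neutron net-gateway-list | awk '/\ default\ / {print $2}')
NID=$(neutron net-list | awk '/\ web-net\ / {print $2}')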

  4. Attach the logical network to the network gateway using the neutron net-gateway-connect command. Assuming that you’ve stored the UUID of the network gateway in $GWID and the UUID for the logical network in $NID, then the command you’d use would be neutron net-gateway-connect $GWID $NID --segmentation_type=flat. This command must be done by someone with administrative credentials.

  5. If you are using multiple VLANs on the outside of the network gateway, then you’d replace --segmentation_type=flat with --segmentation_type=vlan and add another parameter, --segmentation_id=, with the appropriate VLAN ID. For example, if you wanted to bridge the logical network to VLAN 200, then you’d use segmentation_type=vlan and segmentation_id=200.
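
Putting that together, the command to bridge the logical network to VLAN 200 would look like this:

neutron net-gateway-connect $GWID $NID --segmentation_type=vlan --segmentation_id=200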

  6. That’s it! You now have your Neutron logical network bridged out to a broadcast domain on the physical network.

If you need to change the mapping between a broadcast domain on the physical network and a Neutron logical network, simply use neutron net-gateway-disconnect to disconnect the existing logical network, and then use neutron net-gateway-connect to connect a different logical network to the physical network segment.
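
For example, assuming the relevant UUIDs are stored in environment variables (the variable names below are just placeholders), the sequence might look something like this; note that, depending on your plugin version, net-gateway-disconnect may also need the segmentation type and ID in order to identify which connection to remove:

neutron net-gateway-disconnect $GWID $OLD_NID
neutron net-gateway-connect $GWID $NEW_NID --segmentation_type=vlan --segmentation_id=200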

I hope you’ve found this post to be useful. The use of L2 gateways offers administrators and operators a new option for network connectivity for tenants in addition to L3 routing. I’ll explore additional options for network connectivity in future posts, so stay tuned. In the meantime, feel free to share any comments, thoughts, or corrections in the comments below.

This is part 16 of the Learning NSX series, in which I will show you how to configure VMware NSX to route to multiple external VLANs. This configuration will allow you to have logical routers that could be uplinked to any of the external VLANs, providing additional flexibility for consumers of NSX logical networks.

Naturally, this post builds on all the previous entries in this series, so I encourage you to visit the Learning NVP/NSX page for links to previous posts. Because I’ll specifically be discussing NSX gateways and routing, there are some posts that are more applicable than others; specifically, I strongly recommend reviewing part 6, part 9, part 14, and part 15. Additionally, I’ll assume you’re using VMware NSX with OpenStack, so reviewing part 11 and part 12 might also be helpful.

Ready? Let’s start with a very quick review.

Review of NSX Gateway Connectivity

You may recall from part 6 that the NSX gateway appliance is the piece of VMware NSX that handles traffic into or out of logical networks. As such, the NSX gateway appliance is something of a “three-legged” appliance:

  • One “leg” (network interface) provides management connectivity between the gateway appliance and the nodes in the NSX controller cluster
  • One “leg” provides connectivity to the transport network, which carries the encapsulated logical network traffic
  • One “leg” is the uplink and provides connectivity to physical networks

That’s the physical architecture. From a more logical architecture, you may recall from part 15 that NSX gateway appliances are combined into an NSX gateway service, and the NSX gateway service hosts one or more logical routers. Neither the NSX gateway appliance nor the NSX gateway service are visible to the consumers of the environment; they are only visible to the operators and/or administrators. Consumers only see logical routers, which also serve as the default gateway/default route/IP gateway to/from their logical networks.

The configurations I’ve shown you/discussed so far have assumed the presence of only a single uplink. NSX is not constrained to having only a single uplink, nor is it constrained to having only a single physical network on an uplink. If you need multiple networks on the outside of an NSX gateway appliance, you can either use multiple uplinks, or you can use multiple VLANs on an uplink. In this post I’ll show you how to use multiple VLANs on the outside. This diagram provides a graphical representation of what the configuration will look like.

Multiple VLANs with NSX Gateways

Setting up this configuration will involve three steps:

  1. Configuring the uplink to carry multiple VLANs.
  2. Verifying the gateway configuration.
  3. Setting up the external networks in OpenStack.

Let’s take a look at each of these steps.

Configuring the Uplink to Carry Multiple VLANs

The process for this step will vary, mostly because it involves configuring your physical network to pass the appropriate VLANs to the NSX gateway appliance. I’ve written a few articles in the past that might be helpful here:

Although the titles of some of these articles seem to imply they are VMware-specific, they aren’t—the physical switch configuration is absolutely applicable here.

Verifying the Gateway Configuration

No special configuration is required on the NSX gateway appliance. As you probably already know, the NSX gateway appliance leverages Open vSwitch (OVS). OVS ports are, by default, trunk ports, and therefore will carry the VLAN tags passed by a properly configured physical switch. Further, the OVS bridge for the external uplink (typically breth1 or breth2) doesn’t need an IP address assigned to it. This is because the IP address(es) for logical routing are assigned to the logical routers, not the NSX gateway appliance’s interface. If you do have IP addresses assigned to the external uplink interface, you can safely remove them. If you prefer to leave them, that’s fine too.

As a side note, the NSX gateway appliances do support configuring VLAN sub-interfaces using a command like this:

add network interface <physical interface> vlan <VLAN ID>

Thus far, I haven’t found a need to use VLAN sub-interfaces when using multiple VLANs on the outside of an NSX gateway appliance, but I did want to point out that this functionality does indeed exist.

Setting up the External Networks

This is the only moderately tricky part of the configuration. In this step, you’ll prepare multiple external networks that can be used as uplinks for logical routers.

The command you’ll want to use (yes, you have to use the CLI—this functionality isn’t exposed in the OpenStack Dashboard web interface) looks like this:

neutron net-create <network name> -- --router:external=True \
--provider:network_type l3_ext --provider:segmentation_id <VLAN ID> \
--provider:physical_network=<NSX gateway service UUID> --shared=True

For the most part, this command is pretty straightforward, but let’s break it down nevertheless:

  • The router:external=True tells Neutron this network can be used as the external (uplink) connection on a logical router.
  • The provider:network_type l3_ext is an NSX-specific extension that enables Neutron to work with the layer 3 (routing) functionality of the NSX gateway appliances.
  • The provider:segmentation_id portion provides the VLAN ID that should be associated with this particular external network. This VLAN ID should be one of the VLAN IDs that is trunked across the connection from the physical switch to the NSX gateway appliance.
  • The provider:physical_network portion tells OpenStack which specific NSX gateway service to use. This is important to note: this command references an NSX gateway service, not an NSX gateway appliance. Refer to part 15 if you’re unclear on the difference.

You’d repeat this command for each external network (VLAN) you want connected to NSX and usable inside OpenStack.

For each Neutron network, you’ll also need a Neutron subnet. The command to create a subnet on one of these external networks looks like this:

neutron subnet-create <network name> <CIDR> \
--name <subnet name> --enable_dhcp=False \
--allocation-pool start=<starting IP address>,end=<ending IP address>

The range of IP addresses specified in the --allocation-pool portion of the command becomes the range of addresses from this particular subnet that can be assigned as floating IPs. It is also the pool of addresses from which logical routers will pull an address when they are connected to this particular external network.

When you’re done creating an external network and subnet for each VLAN on the outside of the NSX gateway appliance, then your users (consumers) can simply create logical routers as usual, and then select from one of the external networks as an uplink for their logical routers. This assumes you included the shared=True portion of the command when creating the network; if desired, you can omit that and instead specify a tenant ID, which would assign the external network to a specific tenant only.
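
To tie these pieces together, here’s a hedged example of creating an external network and matching subnet for VLAN 100; the network name, addressing, and allocation pool are placeholders, and you’d substitute the UUID of your own NSX gateway service:

neutron net-create ext-vlan100 -- --router:external=True \
--provider:network_type l3_ext --provider:segmentation_id 100 \
--provider:physical_network=<NSX gateway service UUID> --shared=True
neutron subnet-create ext-vlan100 192.168.10.0/24 \
--name ext-vlan100-subnet --enable_dhcp=False \
--allocation-pool start=192.168.10.100,end=192.168.10.199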

I hope you find this post to be useful. If you have any questions, corrections, or clarifications, please speak up in the comments. All courteous comments are welcome!

Welcome to Technology Short Take #45. As usual, I’ve gathered a collection of links to various articles pertaining to data center-related technologies for your enjoyment. Here’s hoping you find something useful!

Networking

  • Cormac Hogan has a list of a few useful NSX troubleshooting tips.
  • If you’re not really a networking pro and need a “gentle” introduction to VXLAN, this post might be a good place to start.
  • Also along those lines—perhaps you’re a VMware administrator who wants to branch into networking with NSX, or you’re a networking guru who needs to learn more about how this NSX stuff works. vBrownBag has been running a VCP-NV series covering various objectives from the VCP-NV exam. Check them out—objective 1, objective 2, objective 3, and objective 4 have been posted so far.

Servers/Hardware

  • I’m going to go out on a limb and make a prediction: In a few years’ time (let’s say 3–5 years), Intel SGX (Software Guard Extensions) will be regarded as being as important as, if not more important than, the virtualization extensions. What is Intel SGX, you ask? See here, here, and here for a breakdown of the SGX design objectives. Let’s be real—the ability for an application to protect itself (and its data) from rogue software (including a compromised or untrusted operating system) is huge.

Security

  • CloudFlare (disclaimer: I am a CloudFlare customer) recently announced Keyless SSL, a technique for allowing organizations to take advantage of SSL offloading without relinquishing control of private keys. CloudFlare followed that announcement with a nitty gritty technical details post that describes how it works. I’d recommend reading the technical post just to get a good education on how encryption and TLS work, even if you’re not a CloudFlare customer.

Cloud Computing/Cloud Management

  • William Lam spent some time working with some “new age” container cluster management tools (specifically, govmomi, govc CLI, and Kubernetes on vSphere) and documented his experience here and here. Excellent stuff!
  • YAKA (Yet Another Kubernetes Article), this time looking at Kubernetes on CoreOS on OpenStack. (How’s that for buzzword bingo?)
  • This analytical evaluation of Kubernetes might be helpful as well.
  • Stampede.io looks interesting; I got a chance to see it live at the recent DigitalOcean-CoreOS meetup in San Francisco. Here’s the Stampede.io announcement post.

Operating Systems/Applications

  • Trying to wrap your head around the concept of “microservices”? Here’s a write-up that attempts to provide an introduction to microservices. An earlier blog post on cloud native software is pretty good, too.
  • Here’s a very nice collection of links about Docker, ranging from how to use Docker to how to use the Docker API and how to containerize your application (just to name a few topics).
  • Here’s a great pair of articles (part 1 and part 2) on microservices and Platform-as-a-Service (PaaS). This is really good stuff, especially if you are trying to expand your boundaries learning about cloud application design patterns.
  • This article by CenturyLink Labs—which has been doing some nice stuff around Docker and containers—talks about how to containerize your legacy applications.
  • Here’s a decent write-up on comparing LXC and Docker. There are also some decent LXC-specific articles on the site as well (see the sidebar).
  • Service registration (and discovery) in a micro-service architecture can be challenging. Jeff Lindsay is attempting to help address some of the challenges with Registrator; more information is available here.
  • Unlike a lot of Docker-related blog posts, this post by RightScale on combining VMs and containers for better cloud portability is a well-written piece. The pros and cons of using containers are discussed fairly, without hype.
  • Single-process containers or multi-process containers? This site presents a convincing argument for multi-process containers; have a look.
  • Tired of hearing about containers yet? Oh, come on, you know you love them! You love them so much you want to run them on your OS X laptop. Well…read this post for all the gory details.

Storage

  • The storage aspect of Docker isn’t typically discussed in a lot of detail, other than perhaps focusing on the need for persistent storage via Docker volumes. However, this article from Red Hat does a great job (in my opinion) of exploring storage options for Docker containers and how these options affect performance and scalability. Looks like OverlayFS is the clear winner; it will be great when OverlayFS is in the upstream kernel and supported by Docker. (Oh, and if you’re interested in more details on the default device mapper backend, see here.)
  • This is a nice write-up on Riverbed SteelFusion, aka “Granite.”

Virtualization

  • Azure Site Recovery (ASR) is similar to vCloud Air’s Disaster Recovery service, though obviously tailored toward Hyper-V and Windows Server (which is perfectly fine for organizations that are using Hyper-V and Windows Server). To help with the setup of ASR, the Azure team has a write-up on the networking infrastructure setup for Microsoft Azure as a DR site.
  • PowerCLI in the vSphere Web Client, eh? Interesting. See Alan Renouf’s post for full details.
  • PernixData recently released version 2.0 of FVP; Frank Denneman has all the details here.

That’s it for this time, but be sure to visit again for future episodes. Until then, feel free to start (or join in) a discussion in the comments below. All courteous comments are welcome!

Welcome to Technology Short Take #44, the latest in my irregularly-published series of articles, links, ideas, and thoughts about various data center-related technologies. Enjoy!

Networking

  • One of the original problems with the VXLAN IETF specification was that it (deliberately) didn’t include any control plane information; as a result, the process of mapping MAC addresses to VTEPs (VXLAN Tunnel Endpoints) wasn’t defined, and the early implementations relied on multicast to handle this issue. To help resolve this issue, Cumulus Networks (and possibly Metacloud, I’m not sure of their involvement yet) has released an open source project called vxfld. As described in this Metacloud blog post, vxfld is designed to “handle VXLAN traffic from any operating system or hardware platform that adheres to the IETF Internet-Draft for VXLAN”.
  • Nir Yechiel recently posted part 1 of a discussion on the need for network overlays. This first post is more of a discussion of why VLANs and VLAN-based derivatives aren’t sufficient, and why we should be looking to routing (layer 3) constructs instead. I’m looking forward to part 2 of the series.
  • One ongoing discussion in the network industry these days—or so it seems—is the discussion about the interaction between network overlays and the underlying transport network. Some argue that tight integration is required; others point to streaming video services and VoIP running across the Internet and insist that no integration or interaction is needed. In this post, Scott Jensen argues in favor of the former—that SDN solutions shouldn’t just manage network overlays, but should also manage the configuration of the physical transport network as well. I’d love to hear from more networking pros (please disclose company affiliations) about their thoughts on this matter.
  • I like the distinction made here between network automation and SDN.
  • Need to get a better grasp on OpenFlow? Check out OpenFlow basics and OpenFlow deep-dive.
  • Here’s a write-up on connecting Docker containers using VXLAN. I think there’s a great deal of promise for OVS in containerized environments, but what’s needed is better/tighter integration between OVS and container solutions like Docker.

Servers/Hardware

  • Is Intel having second thoughts about software-defined infrastructure? That’s the core question in this blog post, which explores the future of Intel in a software-defined world and the increasing interest in non-x86 platforms like ARM.
  • On the flip side, proponents who claim that platforms like ARM and others are necessary in order to move forward with SDN and NFV initiatives should probably read this article on 80 Gbps performance from an off-the-shelf x86 server. Impressive.

Security

  • It’s nice to see that work on OpenStack Barbican is progressing nicely; see this article for a quick overview of the project and an update on the status.

Cloud Computing/Cloud Management

  • SDN Central has a nice write-up on the need for open efforts in the policy space, which includes the Congress project.
  • The use of public cloud offerings as disaster recovery targets is on the rise; note this article from Microsoft on how to migrate on-premises workloads to Azure using Azure Site Recovery. VMware has a similar capability via the VMware vCloud Hybrid Service recovery-as-a-service offering.
  • The folks at eNovance have a write-up on multi-tenant Docker with OpenStack Heat. It’s an interesting write-up, but not for the faint of heart—to make their example work, you’ll need the latest builds of Heat and the Docker plugin (it doesn’t work with the stable branch of Heat).
  • Preston Bannister took a look at cloud application backup in OpenStack. His observations are, I think, rational and fair, and I’m glad to see someone paying attention to this topic (which, thus far, I think has been somewhat ignored).
  • Interested in Docker and Kubernetes on Azure? See here and here for more details.
  • This article takes a look at Heat-Translator, an effort designed to provide some interoperability between TOSCA and OpenStack HOT documents for application deployment and orchestration. The portability of orchestration resources is one of several aspects you’ll want to examine as you progress down the route of fully embracing a cloud computing operational model.

Operating Systems/Applications

  • Looks like we have another convert to Markdown—Anthony Burke recently talked about how he uses Markdown. Regular readers of this site know that I do almost all of my content generation using MultiMarkdown (a variation of Markdown with some expanded syntax options). Here’s a post I recently published on some useful Markdown tools for OS X.
  • Good to see that Ivan Pepelnjak thinks infrastructure as code makes sense. I guess that means the time I’ve spent with Puppet (you can browse Puppet-related posts here) wasn’t a waste.
  • I don’t know if I’ve mentioned this before (sorry if that’s the case), but I’m liking this “NIX4NetEng” series going on over at Nick Buraglio’s site (part 1, part 2, and part 3).
  • Mike Foley has a blog post on how to go from zero to Windows domain controller in only 4 reboots. Handy.

Storage

Virtualization

  • Running Hyper-V with Linux VMs? Ben Armstrong details what versions of Linux support the various Hyper-V features in this post.
  • Here’s a quick write-up on running VMs with VirtualBox 4.3 on a headless Ubuntu 14.04 LTS server.
  • Nested OS X guest on top of nested ESXi on top of VMware Fusion? Must be something William Lam’s tried. Go have a look at his write-up.
  • Here’s a quick update on Nova-Docker, the effort in OpenStack to allow users to deploy Docker containers via Nova. I’m not yet convinced that treating Docker as a hypervisor in Nova is the right path, but we’ll see how things develop.
  • This post is a nice write-up on the different ways to connect a Docker container to a local network.
  • Weren’t able to attend VMworld US in San Francisco last week? No worries. If you have access to the recorded VMworld sessions, check out Jason Boche’s list of the top 10 sessions for a priority list of what recordings to check out. Or need a recap of the week? See here (one of many recap posts, I’m sure).

That’s it this time around; hopefully I was able to include something useful for you. As always, all courteous comments are welcome, so feel free to speak up in the comments. In particular, if there is a technology area that I’m not covering (or not covering well), please let me know—and suggestions for more content sources are certainly welcome!

In this post, I’ll share a simple template for deploying Docker containers in an OpenStack environment using Heat. Given that Docker is targeted at application deployment, I felt that using Heat was a more appropriate way of leveraging Docker in an OpenStack environment than treating Docker as a form of hypervisor. Later in this post, I’ll compare this approach to using a more container-aware solution such as fleet.

I assume you’re already familiar with OpenStack Heat and Docker. If you aren’t, take a look at these articles first:

Prerequisites

Before you can actually use Heat to orchestrate Docker containers, there are some prerequisites you’ll need to have done first:

  1. You’ll need to have the Docker plugin for Heat installed. This can be tricky; see here for some instructions that worked for me. To verify that the Docker plugin is working as expected, run heat resource-type-list and check the output for “DockerInc::Docker::Container”. If that resource type is included in the output, then the Docker plugin is working as expected.
  2. Any Docker hosts you’re running must have Docker configured to listen on a network-accessible socket. I was running CoreOS in my environment, so I followed the instructions to make Docker on CoreOS listen on a TCP socket. (In case the link doesn’t take you to the right section, see the section titled “Enable the Remote API on a New Socket.”) In my case, I selected TCP port 2345. Make note of whatever port you select, as you’ll need it in your template.
  3. Any Docker hosts that will be orchestrated by Heat must have an IP address assigned that is reachable from the server where Heat is running (typically the cloud controller). In my case, I use Neutron with NSX, so I had to assign floating IPs to the instances with which Heat would be communicating.
  4. You’ll need to be sure that the TCP port you select for Docker to use (I used TCP port 2345) is accessible, so modify any security groups assigned to the instances to allow inbound TCP traffic on that port from the appropriate sources.

Once these prerequisites are addressed—Docker plugin installed and working, Docker listening on a TCP port, instance reachable from cloud controller on selected TCP port—then you’re ready to go.

Template for Docker Orchestration

Here is a sample template that will create a Docker container on an existing instance:

(Click here if you don’t see the code block above.)

As I said, this is pretty simple. The image property is the name of the Docker image you want to use; in this case, I’m using an image containing the popular Nginx web server. The docker_endpoint property should be a URL that specifies the protocol (TCP), IP address (in my case, a floating IP address assigned to the instance), and the port number on which the Docker daemon is listening. Note that the format for this property isn’t documented anywhere I’ve found.
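
Based on that description, the value would presumably look something like this (the IP address below is a placeholder for the floating IP assigned to your Docker host, and 2345 is the TCP port I chose earlier):

docker_endpoint: tcp://203.0.113.25:2345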

In the “stable/icehouse” branch of the Docker plugin (required if you’re using distro packages for your OpenStack installation, as I am), there are some additional properties available as well. Unfortunately, without any documentation on what these properties should look like, I was unable to make any of them work. In particular, the port_specs property, which controls how ports in a Docker container are exposed to the outside world, would have been very useful and applicable, but I couldn’t get the template to work with it included. If anyone has information on the exact syntax and format for the port_specs property in the “stable/icehouse” branch of the plugin, please speak up in the comments.

Naturally, you could embed this portion of YAML code into a larger HOT-formatted template that also launched instances, created Neutron networks, attached the instances to Neutron networks, created a logical router, and mapped a floating IP address to the instance. I leave the creation of such a template as an exercise for the reader, but I will point out that I’ve already shared with you almost all the pieces necessary to do exactly that. (See the blog posts I provided earlier.)

Summary

I mentioned at the start of this post that I’d provide some comparison to other methods for deploying containers in an automated fashion. With that in mind, here are a few points you’ll want to consider:

  • There is no container scheduling in this solution. Containers are statically mapped to a container host (the VM instance, in this case, although this could be a bare metal host running Docker as well). Other solutions, like fleet, at least let you just point to a cluster of systems instead of a specific system. (See this write-up on fleet for more information.)
  • Docker must be listening on a TCP socket. This isn’t Docker’s default configuration, so this is an additional change that must be incorporated into the environment. Fleet doesn’t have this requirement, although other solutions such as Mesos might (I haven’t tested any other solutions—yet.)
  • There is very little documentation available right now. Note that this may be true for other solutions as well (this entire space is relatively new and growing/evolving rapidly). Regardless, until someone can at least figure out how to expose Docker containers to the network via a Heat template, this isn’t very useful.

My initial assessment is that OpenStack needs container scheduling, not static assignment, in order for Docker integration into OpenStack to be truly useful. Proponents of the Nova-Docker approach (treating Docker as a hypervisor and Docker images as Glance images) point to their approach as superior because of the integration of Nova’s scheduling functionality. It will be interesting to see how things develop on this front.

If you have any questions, have more information to share, or have corrections or clarifications to any of the information presented here, please speak up in the comments.

In this post, I’m going to illustrate one way to deploy CoreOS on OpenStack using Heat. By no means is this intended to be seen as the only way to use Heat to deploy CoreOS, but rather as one way of using Heat to deploy CoreOS. I’m publishing this in the hopes that others will be able to use this as a building block for their own deployments.

If you aren’t already familiar with OpenStack Heat or CoreOS, you might want to take a moment and refer to these introductory posts for some foundational information:

Moving forward, OpenStack Heat is trying to standardize on OpenStack resource types (like OS::Nova::Server) and the HOT format (using YAML). Therefore, the Heat template I’m presenting here will use OpenStack resource types and YAML. Note that it’s certainly possible to do this using CloudFormation (CFN) resource types and JSON formatting. I’ll leave the conversion of the template found here into CFN/JSON as an exercise for the readers.

Here’s the example Heat template you can use to deploy and customize CoreOS on OpenStack:

(Click here if you can’t see the code block above.)

Let’s walk through this template real quick:

  • On line 9, you’ll need to provide the ID for the Neutron network to which the new CoreOS instance(s) should connect. You can get this a couple of different ways; running neutron net-list is one way. (The short example following this list shows one way to gather all of the values this template needs.)
  • On line 14, you’ll need to supply the ID for the CoreOS image you’ve uploaded into Glance. Again, there are multiple ways to obtain this; running glance image-list is one way of getting that information.
  • On line 22, replace the text (including the “<” and “>” symbols) with the ID of the security group you want applied to the CoreOS instance(s) being deployed. The neutron security-group-list command can give you the information you need to put here.
  • On line 31, supply the name of the SSH key you want to inject into the instance(s).
  • On line 37, you’ll need to generate a unique cluster ID to place here for the configuration of etcd within the CoreOS instance(s). You can generate a new ID (also called a token) by visiting https://discovery.etcd.io/new. That will return another URL that contains the new etcd cluster token. Supply that token here to create a new etcd cluster out of the CoreOS instance(s) you’re deploying with this template.
  • This template only deploys a single CoreOS instance. To deploy multiple CoreOS instances, you’ll need a separate OS::Neutron::Port and OS::Nova::Server resource for each instance. For each Neutron port, you can reference the same security group ID and network ID. For each instance, you can reference the same Glance image ID, same SSH key, and same etcd cluster token; the only thing that would change with each instance is line 30. Line 30 should point to a unique Neutron port resource created for each instance (something like instance1_port0, instance2_port0, etc.).
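
As promised above, here’s a hedged sketch of gathering those values from the command line before editing the template; the names used in the patterns (tenant-net, coreos, and default) are placeholders for whatever actually exists in your environment:

NETID=$(neutron net-list | awk '/ tenant-net / {print $2}')
IMAGEID=$(glance image-list | awk '/ coreos / {print $2}')
SGID=$(neutron security-group-list | awk '/ default / {print $2}')
ETCD_DISCOVERY=$(curl -s https://discovery.etcd.io/new)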

Now, there are obviously lots of other things you could do here—you could create your own Neutron network to host these CoreOS instances, you could create a logical router to provide external connectivity (which is required, by the way, in order for the etcd cluster token discovery to work correctly), and you could create and assign floating IPs to the instances. Examples of some of these tasks are in the articles I provided earlier; others are left as an exercise for the reader. (Or I’ll write up something later. We’ll see.)

Once you have your template, you can deploy the stack using Heat, and then—after your CoreOS cluster is up and running—begin to deploy applications to the cluster using tools like fleet. That, my friends, is another story for another day.

Any questions? Corrections? Clarifications? Feel free to start (or join) the discussion below. All courteous comments are welcome.

A new startup emerged from stealth today, a company called Platform9. Platform9 was launched by former VMware veterans with the goal of making it easy for companies to consume their existing infrastructure in an agile, cloud-like fashion. Platform9 seeks to accomplish this by offering a cloud management platform that is itself provided as a cloud-based service—hence the name of this post, “cloud-hosted cloud management.”

It’s an interesting approach, and it certainly helps eliminate some of the complexity that organizations face when implementing their own cloud management platform. For now, at least, that is especially true for OpenStack; the popular open source cloud management environment can be notoriously difficult for newcomers. By Platform9 offering an OpenStack API-compatible service, organizations that want a more “public cloud-like” experience can get it without all the added hassle.

The announcements for Platform9 talk about support for KVM, vSphere, and Docker, though the product will only GA with KVM support (support for vSphere and Docker are on the roadmap). Networking support is also limited; in the initial release, Platform9 will look for Linux bridges with matching names in order to stitch together networks. However, customers will get an easy, non-disruptive setup with a nice set of dashboards to help show how their capacity is being utilized and allocated.

It will be interesting to see how things progress for Platform9. The idea of providing cloud management via an SaaS model (makes me think of “cloud inception”) is an interesting one that does sidestep many adoption hurdles, though questions of security, privacy, confidentiality, etc., may still hinder adoption in some environments.

Thoughts on Platform9? Feel free to speak up in the comments below. All courteous comments are welcome!
