This is a semi-liveblog of the day 2 keynote at VMworld 2014 in San Francisco.

Ben Fathi, CTO of VMware, takes the stage to start the keynote, and goes through a recap of yesterday’s announcements: EVO and EVO:RAIL, vCloud Air rebranding and the availability of new services, VMware Integrated OpenStack (VIO), and new product versions (vCloud Suite 5.8, betas of vSphere and VSAN).

Fathi re-emphasizes that VMware’s SDDC vision is an architecture, one built on the “power of AND.” vCloud Suite (and vCloud Air) is a manifestation of that SDDC vision/architecture.

Fathi transfers the baton to Sanjay Poonen, EVP for End-User Computing at VMware. Poonen is a high-energy speaker, and his enthusiasm for end-user computing is clear (and contagious). The world of end-user computing is changing; not only are users changing, but apps and data are changing as well. Poonen talks about VMware’s EUC vision: enabling users to work “at the speed of life.” Today’s environments are far more heterogeneous than in the past. Poonen uses an example of a rooftop infinity pool in Singapore as a metaphor for the “three foundations” of EUC: SDDC, workspace services (authentication and content collaboration, for example), and the desktop/mobile/cloud experience.

Poonen goes on to talk about VMware’s desktop leadership. Horizon delivers a unified application publishing and VDI solution. Next is Desktop-as-a-Service (DaaS), via Desktone, and Poonen mentions DaaS via vCloud Air. Next Poonen talks about real-time application delivery via the CloudVolumes acquisition. Finally, Poonen discusses the rich user experience, and he announces a partnership between Google, NVIDIA, and VMware, and rolls into a video about the results of the partnership. The partnership enables users of VMware Horizon View on VMware vSphere on next-generation Google Chromebooks to utilize massively graphics-intensive applications with outstanding performance. (This is a pretty cool announcement, IMHO.)

The second major pillar that Poonen discusses is enterprise mobility management. It’s not just about application management, but also about content management, email management, and device management. Poonen brings onto stage Kevin Ichhpurani from SAP, to discuss a partnership between VMware and SAP. The partnership brings together AirWatch and SAP Secure Mobile for the secure delivery of applications to mobile devices.

The third major pillar that Poonen discusses is content collaboration. This means “anytime/anywhere” access, hybrid deployment, and enterprise-grade functionality/support. Poonen talks about how United Airlines is transforming its operations using mobile devices (supported by Apple devices and AirWatch by VMware). Poonen talks about all the integrations that are present: integration between SDDC and workspace services, and integration between desktop services and content collaboration. Poonen revisits the VMware Workspace Suite announcement from yesterday, drawing a comparison between vCloud Suite for SDDC and the Workspace Suite for EUC.

Poonen mentions that VMware has been designated a leader by GigaOm and the Radicati Group.

At this point, Poonen brings out Kit Colbert, CTO for EUC at VMware. Colbert is well-known to the VMworld crowd and is a veteran at VMware. Colbert starts his discussion using the idea of a “mobile cloud architecture” as the basis for the discussion. Colbert starts the first demo of the day by framing the demo around the concept of a doctor moving through various devices and applications over the course of a day. The demo spans a Mac in the doctor’s office, an iPad that he carries with him, and a thin client leveraging rich 3D graphics functionality. AirWatch Secure Content Locker facilitates some fictional collaboration between Colbert and Poonen (who is also a doctor in this example) across different security zones. The summary of the demo is that all the applications were in one place, data followed the user, and 3D images were available on any device.

Next Colbert takes the audience into a demo of CloudVolumes, and how CloudVolumes makes application delivery fast and seamless (like provisioning applications via AirWatch to a mobile device). In the demo, Colbert provisions Office to a VDI desktop almost instantaneously.

Application delivery is important, but desktop delivery and desktop provisioning are also important. Colbert next dives into more detail on Project Fargo, which enables vastly faster provisioning of virtual desktops via a cloning process. The clone is very lightweight because it shares not only disk but also memory with the original. Colbert begins to tie things together: VMware uses Project Fargo to quickly clone the desktop, then uses CloudVolumes to attach applications to that desktop. When the desktop is no longer needed, it is simply discarded. This radically simplifies the administration and management of desktops and applications. Colbert name-drops a new project name—Project Meteor—but doesn’t go into great detail on exactly what Meteor encompasses.

Poonen and Colbert turn the stage over to Raghu Raghuram, who heads up VMware’s SDDC efforts. Raghuram reviews the achievements of the core components of the SDDC architecture—compute (via vSphere, including the vSphere 6.0 beta announced yesterday), storage (via VSAN), networking (via NSX—go VCDX-NV!), and management (via the newly-rebranded vRealize suite). Once again, Raghuram talks about SDDC as “the power of AND”.

Raghuram next transitions into a review of the “three paths” to SDDC that were introduced by Gelsinger yesterday: build your own, converged infrastructure, and hyper-converged infrastructure. Naturally, this leads to a discussion of the EVO family, especially EVO:RAIL. Fathi takes the stage again and prompts Raghuram into discussing how quickly EVO:RAIL can be turned up (up and running in 15 minutes, scales up to 100 VMs or 250 desktop VMs, supports non-disruptive upgrades), and that leads into a demo of EVO:RAIL. The hardware specification for EVO:RAIL is a 2U, four-node enclosure (think Dell C6100 or similar hardware form factor). EVO:RAIL building blocks will automatically discover each other, allowing users to easily build 16 node clusters (four EVO:RAIL blocks). EVO:RAIL also offers a greatly simplified web-based UI, although the vSphere Web Client is still available for use. EVO:RAIL includes vSphere and VSAN, and includes Log Insight as well.

EVO:RACK is a larger-scale implementation that builds on EVO:RAIL and includes all of the vCloud Suite. It starts at half a rack and scales to multiple racks (I didn’t catch exactly how many). VMware’s goal for EVO:RACK is to be up and running and deploying applications in less than 2 hours. EVO:RACK, of course, is still in technical preview.

Next Raghuram moves onto talking about VMware Integrated OpenStack (VIO), which was announced yesterday as being available in beta. Over the next few minutes he discusses VMware’s involvement in OpenStack, and reviews why running OpenStack on VMware is the best choice for deploying OpenStack today. It doesn’t hurt that VMware offers a single point of contact for deploying OpenStack on VMware using VIO, although VMware continues to partner with companies like Canonical, HP, SuSE, and Mirantis for OpenStack deployments as well. (HP announced yesterday that HP Helion will support vSphere and NSX.)

From there, Raghuram transitions to a discussion of traditional applications and next-generation applications. This leads to mention of support for 4 vCPUs with VMware Fault Tolerance in the vSphere 6 beta. (This expansion of support is a long time coming.) Cross-vCenter vMotion is also coming in vSphere 6, as well as long-distance vMotion (coast-to-coast vMotion mentioned as an example). Raghuram transitions to Fathi for a discussion of next-generation applications.

Fathi talks about the use of microservices as the basis for next-generation applications, which clearly indicates he will be talking about containers. Fathi paints containers as “having been around for years,” including pointing to a feature in Windows Data Center Server 15 years ago. However, Docker has clearly captured the attention of developers, and so Fathi talks about where Docker fits. This leads into the “containers without compromise” message, which means that running containers on VMs on VMware’s SDDC architecture is the best way to deliver containerized applications and microservices today. Fathi mentions VMware working closely with Docker and Kubernetes, and explains the concept of a Kubernetes pod as a collection of containers. The value that VMware offers in a Docker/Kubernetes environment includes persistent storage via VSAN, pod mobility via vMotion/DRS, enhanced security via the VM operational model, and the rich ecosystem of tools supporting VMware virtualized infrastructure. Fathi re-iterates that SDDC is a single platform for running traditional applications as well as next-generation applications, and reminds attendees that CloudFoundry has supported containers (via Warden) for a number of years. (In other words, containers are not new to VMware.) VMware announces today that VMware is working with Docker, Google, and Pivotal to make containers a “first-class” citizen in the VMware SDDC architecture.

This leads into a demo by Fathi showing the use of vCAC (vRealize Automation?) deploying a blueprint. The demo shows CoreOS and Kubernetes deploying Vulcan, a service registration/service discovery tool. Raghuram asks about Project Fargo, and Fathi explains how technologies like those in Project Fargo can make containers on vSphere even better than containers on bare metal.

Next up, Raghuram talks about the vRealize suite of management applications and tools. vRealize will be available as an on-premises implementation as well as available via vCloud Air in a SaaS model. I think that the value of this second approach is possibly being overlooked by a good number of conference attendees, but it is quite a significant announcement in my humble opinion.

Fathi takes over again to discuss the need for policy in the SDDC architecture. Policy can (and should) be applied not only to networking, but also to storage, compute, and other areas. This leads into a demo, hosted within vCAC (vRealize Automation?), and shows how various policies like storage policies are implemented to affect how applications are deployed within an SDDC architecture. (There is, of course, another side of policy that isn’t being mentioned here, and that’s what Congress attempts to address. But that’s another story for another day.) There is a nice demo of integrations between vCAC and NSX, where vCAC creates NSX security groups (with associated firewall rules) on-demand as part of a blueprint. That’s nice, and will be quite useful I think.

Speaking of NSX, Fathi jumps into a discussion of microsegmentation as a key value-add for VMware NSX in data centers. While microsegmentation was possible before VMware NSX, the value that NSX offers is that microsegmentation now becomes more manageable and more operationally feasible than it’s been in the past. Fathi reminds the audience that this functionality can be driven by policy.

Raghuram summarizes the various demonstrations and announcements, and leaves the stage. Fathi then invites Simone Brunozzi, VP and Chief Technologist for Hybrid Cloud, to the stage. Brunozzi uses his Google Glass as an excuse to show off a mobile application he uses to “check” on his data center. This leads into a demo that uses vCenter Operations Manager to show the health of vCloud Air resources as well as VMware NSX resources—this shows the expanding reach of vCenter Operations Manager and the tight integration between the various product lines that VMware is advancing. That demo leads into a demonstration of extending a network from your own data center into vCloud Air, and then migrating workloads (via a copy operation) to vCloud Air directly from within the vSphere Web Client. That’s nothing new, according to Fathi, but what is new is the coordination between firewall rules in NSX and firewall rules in vCloud Air to enable full connectivity between on-premises workloads and workloads in vCloud Air.

Brunozzi next demos using vSphere Replication to replicate to a cloud provider for cloud-based recovery. There was no mention I caught of which providers support this, though I’m sure that vCloud Air via Recovery-as-a-Service does.

At this point I needed to leave to get ready for a session, so I wasn’t able to capture the rest of the keynote. I hope that what I was able to capture was useful—feel free to share any corrections or additions in the comments below. Thanks!


This is a liveblog of the day 1 keynote at VMworld 2014 in San Francisco, CA. (Well, it’s a “semi-liveblog”; I’ll post this immediately after the keynote session wraps up.) As in previous years, the keynote is being held in the massive space in Moscone North, and this year—like previous years—it looks like there is quite a crowd gathered here this morning. I have the privilege of sitting with newly-minted VCDX-NVs and other VCDXes.

The keynote starts with a catchy, jazzy dance/music piece, and then Robin Matlock, Chief Marketing Officer for VMware, takes the stage. Matlock shares that over 22,000 people are gathered here for VMworld, which (if I recall correctly) is down a little bit from last year. Matlock talks about change, and how change can be either a barrier or an opportunity. That leads into a discussion of the building of the Golden Gate Bridge as an allegory for pushing boundaries, exploring endless possibilities, and expanding your knowledge and expertise. This fits into the “No Limits” theme for the conference this week.

After discussing “Operation Giveback” (a charity fundraiser backed by the VMware Foundation), Matlock turns the stage over to VMware’s CEO, Pat Gelsinger, who comes onto the stage in a dramatic “breaking through” effect.

Gelsinger starts his portion of the keynote by introducing the “liquid world,” where change is paramount. Gelsinger provides some examples of various businesses that have embraced the “liquid business” model. The biggest risk to success, according to Gelsinger, is perpetuating the status quo. Change is inevitable. The best way to predict the future is to invent it, and Gelsinger says that’s what brave IT leaders need to do. This leads Gelsinger into a discussion of what the word “brave” means, and that in turn leads to a family-friendly discussion of the first day of school, a kid’s penalty shot in soccer, a driving test, or a public speaking event. Next up is a “who’s who” list of brave business leaders, like Elon Musk and the leader of 23andMe (I didn’t catch the spelling for her name). VMware is on a voyage of brave transformation itself, and Gelsinger provides examples of VMware’s BYOD efforts and running all of VMware’s SAP ERP efforts on vCloud Air.

Gelsinger brings the “fluid” and “brave” themes together in that brave IT involves fluidity and choice (and a third item I couldn’t catch). This leads into a discussion of the software-defined data center (SDDC). As a result of this discussion, Gelsinger goes into some announcements:

  • VMware is announcing VMware vCloud Suite 5.8, which includes new suite “fit and finish” changes
  • VMware is announcing vSphere 6.0 beta
  • VMware is announcing VSAN 2.0 beta
  • As part of the vSphere 6.0 beta, VMware is also announcing Virtual Volumes (VVols)
  • VMware is rebranding the management suite as VMware vRealize Suite

The path to SDDC has three branches: “build your own”, converged infrastructure, and hyper-converged infrastructure. This leads Gelsinger to announce VMware EVO, which is VMware’s family of hyper-converged infrastructure offerings. EVO is the SDDC packaged with hardware to provide the fastest way to realize the benefits of the SDDC. The first member of the EVO family is EVO:RAIL, a building block sized at around 100 VMs of capacity and designed to be up and running in 15 minutes. Hardware partners for EVO:RAIL include Dell, EMC, Fujitsu, Inspur, Net One, and Supermicro.

The EVO Zone, found in the Solutions Exchange, has some examples of EVO products being brought to market.

The second member of the EVO family is EVO:RACK, which is being announced as a technical preview. EVO:RACK builds on EVO:RAIL for greater scale—basically building an “entire cloud” in 2 hours or less. There will be a range of certified hardware solutions available as EVO:RACK solutions. VMware is also joining the OpenCompute Project, and anticipates that EVO:RAIL and EVO:RACK will be available on OCP hardware.

Gelsinger also announces VMware Integrated OpenStack (VIO), VMware’s own OpenStack distribution that will incorporate key VMware technologies like vSphere, VSAN, and NSX. VIO is available in beta.

Continuing the discussion around next-generation platforms, Gelsinger announces partnerships with Docker, Google, and Pivotal around building open container APIs and making containers run better on VMware’s SDDC platform—”containers without compromise.”

That leads Gelsinger into a discussion of security, and that naturally involves VMware NSX. Microsegmentation is becoming a key use case for customers adopting VMware NSX.

After a brief discussion of end-user computing—admittedly brief because Sanjay Poonen will be discussing this in more detail in tomorrow’s keynote—Gelsinger moves on to a discussion of hybrid cloud. VMware vCloud Hybrid Service has been rebranded to vCloud Air, and all VMware products that are offered “as a service” will be rebranded as part of the Air family. Gelsinger brings Bill Fathers, EVP & GM for Hybrid Cloud at VMware, onto the stage.

Fathers recaps what’s happened with vCloud Air over the last 12 months; the service was officially launched last year in the day 1 keynote at VMworld 2013. vCloud Air has been successful in meeting all of the goals originally announced. Fathers reviews the key benefits of using public cloud, especially VMware’s public cloud: agility, efficiency, and cost savings. Fathers discusses the various phases of cloud adoption, and states that he believes the market is currently transitioning from the experimental phase to the professional era.

For companies successfully transitioning into the professional era—which involves much greater use of public cloud resources—the two things these companies are doing are 1) thinking of the apps first; and 2) putting the IT professionals back in charge. According to Fathers, vCloud Air provides the perfect platform for accomplishing both of these goals, due in no small part to the fact that vCloud Air uses the same hypervisor, same VM formats, and same management tools that companies are using today in their private clouds.

Fathers announces a government-focused FedRAMP-compliant vCloud Air offering launching soon (September); this will be very useful for US government-focused partners, service providers, and systems integrators. VMware is also expanding the number of vCloud Air partners in the vCloud Air Network, with 3900 partners available around the world in over 100 countries. vCloud Air Network is, according to Fathers, the world’s largest hybrid cloud network.

Next Fathers moves into a discussion of new services and offerings available with vCloud Air. These new services include a continuous integration as a service offering (CIaaS?); a MySQL and MS SQL database-as-a-service offering (which will later expand to all popular RDBMSes and will offer DR services for on-premises databases); an object storage service based on EMC ViPR; AirWatch and Pivotal CloudFoundry mobile services; and vRealize Air Automation (hosted cloud management tools available in a SaaS model). Fathers also announces a beta of on-demand, PAYG (pay as you go) access to vCloud Air.

Fathers transitions back to Gelsinger, who wraps up the themes and announcements from the keynote, and introduces Carl Eschenbach, President and COO of VMware.

Eschenbach reinforces that the “engine of innovation” is alive and well at VMware, but it is the brave who drive change (referring to the VMworld attendees in the audience). That leads into a customer discussion between Eschenbach and three key customers: Medtronic, MIT IST (Information Systems and Technology), and Ford. Each of the customers comes out on stage and has a brief conversation with Eschenbach.

At this point I had to leave to prepare for a session, so I wasn’t able to capture any of the rest of the keynote.


In this post, I’ll share a simple template for deploying Docker containers in an OpenStack environment using Heat. Given that Docker is targeted at application deployment, I felt that using Heat was a more appropriate way of leveraging Docker in an OpenStack environment than treating Docker as a form of hypervisor. Later in this post, I’ll compare this approach to using a more container-aware solution such as fleet.

I assume you’re already familiar with OpenStack Heat and Docker. If you aren’t, take a look at these articles first:

Prerequisites

Before you can actually use Heat to orchestrate Docker containers, there are some prerequisites you’ll need to address first:

  1. You’ll need to have the Docker plugin for Heat installed. This can be tricky; see here for some instructions that worked for me. To verify that the Docker plugin is working as expected, run heat resource-type-list and check the output for “DockerInc::Docker::Container”. If that resource type is included in the output, then the Docker plugin is working as expected.
  2. Any Docker hosts you’re running must have Docker configured to listen on a network-accessible socket. I was running CoreOS in my environment, so I followed the instructions to make Docker on CoreOS listen on a TCP socket. (In case the link doesn’t take you to the right section, see the section titled “Enable the Remote API on a New Socket.”) In my case, I selected TCP port 2345. Make note of whatever port you select, as you’ll need it in your template.
  3. Any Docker hosts that will be orchestrated by Heat must have an IP address assigned that is reachable from the server where Heat is running (typically the cloud controller). In my case, I use Neutron with NSX, so I had to assign floating IPs to the instances with which Heat would be communicating.
  4. You’ll need to be sure that the TCP port you select for Docker to use (I used TCP port 2345) is accessible, so modify any security groups assigned to the instances to allow inbound TCP traffic on that port from the appropriate sources.

Once these prerequisites are addressed—Docker plugin installed and working, Docker listening on a TCP port, instance reachable from cloud controller on selected TCP port—then you’re ready to go.

Template for Docker Orchestration

Here is a sample template that will create a Docker container on an existing instance:

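The following is a sketch of what such a template looks like; the resource name, IP address, and port are illustrative placeholders, and the `docker_endpoint` value should match your own environment:

```yaml
heat_template_version: 2013-05-23

description: Run an Nginx container on an existing Docker host

resources:
  nginx_container:
    # Resource type provided by the Docker plugin for Heat
    type: "DockerInc::Docker::Container"
    properties:
      # Name of the Docker image to run
      image: nginx
      # Protocol, IP address, and port where the Docker daemon is
      # listening; here, a floating IP and the TCP port 2345 selected
      # in the prerequisites
      docker_endpoint: "tcp://10.1.1.20:2345"
```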

As I said, this is pretty simple. The image property is the name of the Docker image you want to use; in this case, I’m using an image containing the popular Nginx web server. The docker_endpoint property should be a URL that specifies the protocol (TCP), IP address (in my case, a floating IP address assigned to the instance), and the port number on which the Docker daemon is listening. Note that the format for this property isn’t documented anywhere I’ve found.

In the “stable/icehouse” branch of the Docker plugin (required if you’re using distro packages for your OpenStack installation, as I am), there are some additional properties available as well. Unfortunately, without any documentation on what these properties should look like, I was unable to make the template work with any of them included. In particular, the port_specs property, which controls how ports in a Docker container are exposed to the outside world, would have been very useful and applicable, but I couldn’t get it working either. If anyone has information on the exact syntax and format for the port_specs property in the “stable/icehouse” branch of the plugin, please speak up in the comments.

Naturally, you could embed this portion of YAML code into a larger HOT-formatted template that also launched instances, created Neutron networks, attached the instances to Neutron networks, created a logical router, and mapped a floating IP address to the instance. I leave the creation of such a template as an exercise for the reader, but I will point out that I’ve already shared with you almost all the pieces necessary to do exactly that. (See the blog posts I provided earlier.)
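Purely for illustration (not a tested template), a skeleton of such a composite template might look something like the following; every name and ID here is a placeholder, and I’ve omitted the router, floating IP association, and security group plumbing for brevity:

```yaml
heat_template_version: 2013-05-23

description: Skeleton combining an instance, a floating IP, and a container

resources:
  docker_host:
    type: "OS::Nova::Server"
    properties:
      name: docker-host-01
      image: coreos          # placeholder Glance image name
      flavor: m1.small       # placeholder flavor
      networks:
        - network: priv-net  # placeholder Neutron network

  host_fip:
    type: "OS::Neutron::FloatingIP"
    properties:
      floating_network_id: ext-net-uuid  # placeholder external network ID

  # (Floating IP association, logical router, and security group
  # resources omitted for brevity.)

  nginx_container:
    type: "DockerInc::Docker::Container"
    properties:
      image: nginx
      # Build the endpoint URL from the floating IP assigned above,
      # using the TCP port on which Docker is listening
      docker_endpoint:
        str_replace:
          template: "tcp://HOST:2345"
          params:
            HOST: { get_attr: [host_fip, floating_ip_address] }
```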

Summary

I mentioned at the start of this post that I’d provide some comparison to other methods for deploying containers in an automated fashion. With that in mind, here are a few points you’ll want to consider:

  • There is no container scheduling in this solution. Containers are statically mapped to a container host (the VM instance, in this case, although this could be a bare metal host running Docker as well). Other solutions, like fleet, at least let you just point to a cluster of systems instead of a specific system. (See this write-up on fleet for more information.)
  • Docker must be listening on a TCP socket. This isn’t Docker’s default configuration, so this is an additional change that must be incorporated into the environment. Fleet doesn’t have this requirement, although other solutions such as Mesos might (I haven’t tested any other solutions—yet.)
  • There is very little documentation available right now. Note that this may be true for other solutions as well (this entire space is relatively new and growing/evolving rapidly). Regardless, until someone can at least figure out how to expose Docker containers to the network via a Heat template, this isn’t very useful.

My initial assessment is that OpenStack needs container scheduling, not static assignment, in order for Docker integration into OpenStack to be truly useful. Proponents of the Nova-Docker approach (treating Docker as a hypervisor and Docker images as Glance images) point to their approach as superior because of the integration of Nova’s scheduling functionality. It will be interesting to see how things develop on this front.

If you have any questions, have more information to share, or have corrections or clarifications to any of the information presented here, please speak up in the comments.


CoreOS Continued: Fleet and Docker

This post is the third in a series of posts on CoreOS, this time focusing on the use of fleet and Docker to deploy containers across a cluster of systems. This post builds on my earlier introduction to CoreOS and the subsequent more in-depth look at etcd.

I’m assuming that you’re already reasonably familiar with CoreOS, etcd, and Docker. If you aren’t familiar with CoreOS or etcd, have a look at the links in the previous paragraph. If you need a quick introduction to Docker, check out this quick introduction to Docker. While the example I’m going to provide here is fairly simple, it should serve as a reasonable basis upon which to build later.

An Overview of Fleet

The GitHub page for fleet describes it as a “distributed init system” that operates across a cluster of machines instead of on a single machine. It leverages etcd, the distributed key-value store that ships with CoreOS, as well as systemd. Fleet combines etcd and systemd to allow users to deploy containers (configured as systemd units) across a cluster of CoreOS systems.

Using fleet, users can deploy a single container anywhere on the cluster, deploy multiple copies of the same container, ensure that containers run on the same machine (or different machines), or maintain a certain number of instances of a service (thus protecting against failure).

Note that even though fleet helps with scheduling containers across a cluster of systems, fleet doesn’t address some of the other significant challenges that arise from an architecture based on distributed micro-services in containers. Namely, fleet does not address inter-container communications, service registration, service discovery, or any form of advanced utilization-based scheduling. These are topics I hope to be able to explore here in the near future.

Now that you have an idea of what fleet does, let’s take a closer look at actually using fleet.

Interacting with Fleet

By default, the fleetctl command-line client that is provided to interact with fleet assumes it will be interacting with a local etcd endpoint on the loopback address. So, if you want to run fleetctl on an instance of CoreOS in your cluster, no further configuration is needed.

However, it may be easier/more effective to use fleetctl from outside the cluster. There are a couple of different ways to do this: you can tell fleetctl to use a specific endpoint, or you can tunnel the traffic through SSH. Each approach has advantages and disadvantages; I’ll leave it to the readers to determine which approach is the best approach for their specific configurations/situations.

Using a Custom Endpoint

This method is pretty straightforward and simple. Just set an environment variable named FLEETCTL_ENDPOINT, like this:

export FLEETCTL_ENDPOINT=http://10.1.1.7:4001

Obviously, you’d want to make sure that you have the correct IP address (can be any node in the etcd cluster) and port (4001 is the default, I believe). With this environment variable set, now anytime you use fleetctl it will direct traffic to the endpoint you specified. If that specific node in the etcd cluster becomes unavailable, then fleetctl will stop working, and you’ll need to point it to a different node in the cluster.

Tunneling Through SSH

The second way of using fleetctl remotely is to tunnel the traffic through SSH. This method may be a bit more complicated, but naturally offers a bit more security.

To make fleetctl tunnel its communications with etcd through SSH, set an environment variable called FLEETCTL_TUNNEL to the IP address of any node in the etcd cluster, like this:

export FLEETCTL_TUNNEL=10.1.1.7

However, the configuration involves more than just setting the environment variable. fleetctl doesn’t expose any options to configure the SSH connection, and it assumes you’ll be using public key authentication. This means you’ll need access to a private key that will work against the nodes in your etcd cluster. If you followed my instructions on deploying CoreOS on OpenStack via Heat, then you can review the Heat template to see which key was specified to be injected when the instances were spawned. Once you know which key was used, you’ll need to either:

  • place that key on the system where fleetctl is installed, or
  • install fleetctl on a system that already has that key present.

There’s still at least one more step required (possibly two). Because fleetctl doesn’t expose any SSH options, you’re going to need to run an SSH agent on the system you’re using. OS X provides an SSH agent by default, but on Linux systems you will probably have to manually run an SSH agent and add the appropriate SSH key:

eval `ssh-agent -s`
ssh-add ~/.ssh/keyfile.pem

Once the SSH agent is running and the appropriate key is loaded (you’d clearly need to make sure the path and filename are correct in the command listed above), then the last step is to configure your ~/.ssh/config file with options for the CoreOS instances. It’s possible you might be able to get by without this step; I haven’t conducted enough testing to say with absolute certainty one way or another. I suspect it will be needed.

In the ~/.ssh/config file, add a stanza for the system through which you’ll be tunneling the fleetctl traffic. The stanza will need to look something like this:

Host node-01
  User core
  Hostname 10.1.1.7
  IdentityFile ~/.ssh/keyfile.pem

This configuration stanza ensures that when the system you’re using attempts to communicate with the IP address listed above, it will use the specified username and key. Because the key is loaded into the SSH agent, SSH won’t prompt for a passphrase (even if the private key doesn’t have a passphrase associated with it, you’ll still need the SSH agent), and the SSH connection will succeed without any user interaction. That last point is important—fleetctl doesn’t expose any SSH options, so the connection needs to be completely automatic.

Once you have all these pieces in place, then you can simply run fleetctl with the appropriate commands (described in the next section), and the connection to the etcd cluster will happen over SSH via the specified host. Naturally, if that node in the cluster goes away or is unavailable, you’ll need to point your connection to a different node in the etcd cluster.

Using Fleet

Once you have access to the etcd cluster via fleetctl using one of the three methods described above (direct access via a CoreOS instance, setting a custom endpoint, or tunneling over SSH), then you’re ready to start exploring how fleet works.

First, you can list all the machines in the cluster with this command:

fleetctl list-machines

Note the “METADATA” column; this allows you to do some custom scheduling by associating systemd units with specific metadata parameters. Metadata can be assigned either via cloud-config parameters passed when the instance is spawned, or via modifications to the fleet config files.
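Assigning metadata via cloud-config looks something like this (the role and region values here are purely illustrative):

```yaml
#cloud-config

coreos:
  fleet:
    # Arbitrary key=value pairs, comma-separated; fleet exposes these
    # in the METADATA column and unit files can match against them.
    metadata: role=web,region=us-east
```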

To see the units about which the cluster knows, use this command:

fleetctl list-units

If you’re just getting your etcd cluster up and running, the output of this command is probably empty. Let’s deploy a unit that spawns a Docker container running the popular Nginx web server. Here’s a (very) simple unit file that will spin up an Nginx container via Docker:

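A minimal unit file along these lines (the image name and published port are illustrative) might look like:

```ini
[Unit]
Description=Nginx web server
After=docker.service
Requires=docker.service

[Service]
# Remove any stale container with the same name, then run Nginx
ExecStartPre=-/usr/bin/docker rm -f nginx
ExecStart=/usr/bin/docker run --rm --name nginx -p 80:80 nginx
ExecStop=/usr/bin/docker stop nginx
```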

With this file in place on the system where you are running fleetctl, you can submit this to the etcd cluster with this command:

fleetctl submit nginx.service

Then, when you run fleetctl list-units, you’ll see the new unit submitted (but not started). Start it with fleetctl start nginx.service.

Where fleet becomes really useful (in my opinion) is when you want to run multiple units across the cluster. If you take the simple Nginx unit I showed you earlier and extend it slightly, you get this:

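A sketch of the extended unit, saved as nginx.1.service (the image and port are again illustrative):

```ini
[Unit]
Description=Nginx web server 1
After=docker.service
Requires=docker.service

[Service]
ExecStartPre=-/usr/bin/docker rm -f nginx-01
ExecStart=/usr/bin/docker run --rm --name nginx-01 -p 80:80 nginx
ExecStop=/usr/bin/docker stop nginx-01

[X-Fleet]
# Tell fleet never to schedule two matching units on the same machine
X-Conflicts=nginx.*.service
```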

Note the difference here: the Docker container name is changed (to nginx-01) and the filename is different (now nginx.1.service). If you make multiple copies of this file, changing the Docker container name and the unit filename, you can submit all of the units to the etcd cluster at the same time. For example, let’s say you wanted to run 3 Nginx containers on the cluster. Make three copies of the file (nginx.1.service, nginx.2.service, and nginx.3.service), modifying the container name in each copy. Make sure that you have the “X-Conflicts” line in there; that tells fleet not to place two Nginx containers on the same system in the cluster. Then submit them with this command:

fleetctl submit nginx.*.service

And start (launch) them with this command:

fleetctl start nginx.*.service

Give it a few minutes to download the latest Nginx Docker image (assuming it isn’t already downloaded), then run fleetctl list-units and you should see three Nginx containers distributed across three different CoreOS instances in the etcd cluster, all listed as “loaded” and “active”. (You can then test connectivity to those Nginx instances using something like curl.) Congratulations—you’ve just deployed multiple containers automatically across a cluster of systems!
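The copy-and-modify step described above is easy to script. Here’s a minimal sketch that generates the three unit files (the image, published port, and container names are illustrative):

```shell
#!/bin/sh
# Generate nginx.1.service through nginx.3.service, varying only the
# Docker container name (nginx-01, nginx-02, nginx-03) in each unit.
for i in 1 2 3; do
  cat > "nginx.${i}.service" <<EOF
[Unit]
Description=Nginx web server ${i}
After=docker.service
Requires=docker.service

[Service]
ExecStartPre=-/usr/bin/docker rm -f nginx-0${i}
ExecStart=/usr/bin/docker run --rm --name nginx-0${i} -p 80:80 nginx
ExecStop=/usr/bin/docker stop nginx-0${i}

[X-Fleet]
X-Conflicts=nginx.*.service
EOF
done
```

After running this, `fleetctl submit nginx.*.service` picks up all three files at once.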

(Want to see some of the magic behind fleet? Run etcdctl --peers <IP address of cluster node>:4001 ls /_coreos.com --recursive and see what’s displayed. You’re welcome.)

Admittedly, this is a very simple example. However, the basic architecture I’ve shown you here can be extended. For example, by using additional fleet-specific properties like “X-ConditionMachineOf” in your unit file(s), you can run what is known as a “sidekick container.” These containers do things like update an external load balancer, or register the presence of the “primary” container in some sort of service discovery mechanism. (In fact, as I alluded to in my etcd post, you could use etcd as that service discovery mechanism.)

Naturally, fleetctl includes commands for stopping units, destroying units, etc., as well as submitting and starting units. You can use fleetctl help to get more information, or visit the fleet GitHub page.

I hope you’ve found this post to be helpful. Feel free to post any questions, corrections, clarifications, or thoughts in the comments below. Courteous comments are always welcome.


CoreOS Continued: etcd

In this post, I’m going to build on my earlier introduction to CoreOS by taking a slightly more detailed look at etcd. etcd is a distributed key-value store (more on that in a moment) and is one of the key technologies that I feel distinguishes CoreOS from other Linux distributions.

etcd is not, of course, the only distributed key-value store in town. Zookeeper is another very well-known distributed information store. I’m focusing on etcd here for two reasons. First, it’s distributed as part of CoreOS, which means that it isn’t necessarily a separate piece of software that you have to install and configure (although some configuration is needed, as I’ll attempt to show). Second, future blog posts will talk about tools like fleet and others that are being built to leverage etcd, so I want to provide this overview here as a foundation for later posts.

Data Organization in etcd

As I’ve mentioned already, etcd is a distributed key-value store. The data within etcd is organized hierarchically, somewhat like a file system. The top of the hierarchy is “/” (root), and from there users can create paths and keys (with values). Later in this post, I’ll show you how that’s done using both the RESTful etcd API as well as using the etcdctl command-line client.

First, though, let’s look at an example. Let’s say that you wanted to store information about IP endpoints that were running a particular service. You could organize the data within etcd like this:

/ (root, already exists when etcd starts)
/svclocation (path or directory within etcd)
/svclocation/instance1 = 10.1.1.20 (a key in etcd and its associated value)
/svclocation/instance2 = 10.1.1.21 (another key and associated value)

Because etcd is a distributed key-value store, both the organization of the information and the information itself are distributed across all the members of an etcd cluster. If you’re interested in more details on exactly how the data is propagated within an etcd cluster, see here.

Configuration of etcd

etcd can be configured via command-line flags, environment variables, or a configuration file. By default, etcd on CoreOS uses a configuration file generated by cloud-init to set environment variables that affect how etcd operates. My earlier post on deploying CoreOS on OpenStack with Heat provides a brief example of using cloud-init to configure etcd on CoreOS during deployment.

For example, when you configure the addr, peer-addr, or discovery properties via cloud-init, CoreOS puts those values into a file named 20-cloudinit.conf in the /run/systemd/system/etcd.service.d/ directory. That file takes those values and assigns them to the ETCD_ADDR, ETCD_PEER_ADDR, and ETCD_DISCOVERY environment variables, respectively. etcd will read those environment variables when it starts, thus controlling the configuration of etcd. (By the way, the use of the etcd.service.d directory for configuration files is a systemd thing.)

Interacting with etcd

Like most projects these days, etcd provides a RESTful API by which you can interact with it. You can use the command-line utility curl to interact with etcd, using some of the techniques I outlined in this post on interacting with RESTful APIs. Here’s a very simple example: using curl to recursively list the keys and values in a path within etcd:

curl -X GET 'http://10.1.1.7:4001/v2/keys/?consistent=true&recursive=true&sorted=false'

If you want to store some data in etcd (let’s say you wanted to store the value "10.1.1.20:80" at the key "/svclocation"), then you’d do something like this:

curl -L http://10.1.1.7:4001/v2/keys/svclocation -X PUT -d value="10.1.1.20:80"

The JSON response (read this if you’re unfamiliar with JSON) will provide confirmation that the key and value were set, along with some additional information.
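For illustration, the JSON returned for the PUT above looks roughly like this (the index values will vary):

```json
{
  "action": "set",
  "node": {
    "key": "/svclocation",
    "value": "10.1.1.20:80",
    "modifiedIndex": 3,
    "createdIndex": 3
  }
}
```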

To read this value back, you’d use curl like this:

curl -L http://10.1.1.7:4001/v2/keys/svclocation

etcd would respond with a JSON-formatted response that provides the current value of the specified key.

However, to make it easier, there is a command-line client called etcdctl that you can use to interact with etcd. There are both Linux and OS X versions available from the etcd GitHub page; just make sure you download the version that corresponds to the version of etcd that’s running on your CoreOS instance(s). (To determine the version of etcd running on a CoreOS instance, log into the CoreOS instance via SSH and run etcd --version.)

Then, to perform the same listing of keys action as the curl command I provided earlier, you would run this command:

etcdctl --peers 10.1.1.7:4001 ls / --recursive

Similarly, to set a value at a particular key within etcd:

etcdctl --peers 10.1.1.7:4001 mk /svclocation 10.1.1.20:80

And to get a value:

etcdctl --peers 10.1.1.7:4001 get /svclocation

A couple of points to note:

  • By default, etcdctl works against a local instance of etcd. In other words, it defaults to connecting to 127.0.0.1:4001. You’ll have to use the --peers flag if you’re running it external to a CoreOS instance. However, if you’re running it directly from a CoreOS instance, you can omit the --peers flag.
  • You can point etcdctl against any CoreOS instance in an etcd cluster, because the information stored in etcd is shared and distributed across all the members of the cluster. (That’s kind of the whole point behind it.) Note that the --peers option supports listing multiple peers in an etcd cluster.

By now, you might be thinking, “OK, this is interesting, but what does it really do for me?” By itself…not necessarily a whole lot. Where etcd starts to become quite useful is when there are other systems that start to leverage information stored in etcd. A couple of examples come to mind:

  1. Distributed micro-service architectures—where applications are made up of a group of containerized services distributed across a compute farm—need mechanisms for service registration (registering that a particular service is available) and service discovery (finding out where a particular service is running). etcd could be helpful in assisting with both of these tasks (writing key-value pairs into etcd for registration, reading them back for discovery).

  2. “Converting” etcd information into traditional configuration files allows applications that are not etcd-aware to still take advantage of etcd’s distributed architecture. For example, there is a project called confd that takes information stored in etcd and turns it into “standard” configuration files. This means, for example, that you could store dynamic configuration details in etcd, and then use confd to update static configuration files. (A common example of using this is updating an HAProxy configuration file via confd as back-end web server containers start up and shut down. Astute readers will recognize this as a form of service registration and service discovery.)

I’ll be building on some of these concepts in future posts. In the meantime, feel free to post any questions, clarifications, or corrections in the comments below.


I mentioned this on Twitter a few days ago, but wanted to go ahead and formalize some of the details. A blog reader/Twitter follower contacted me with the idea of getting together briefly at VMworld 2014 for some prayer time. I thought it was a great idea (thanks David!), so here are the details.

What: A brief time of prayer
Where: Yerba Buena Gardens, behind Moscone North
When: Monday 8/25 through Wednesday 8/27 at 7:45am (this should give everyone enough time to grab breakfast before the keynotes start at 9am)
Who: All courteous attendees are welcome, but please note that this will be a distinctly Christian-focused and Christ-centric activity. (I encourage believers of other faiths/religions to organize equivalent activities.)
Why: To spend a few minutes in prayer over the day, the conference, and the attendees

There’s no need to RSVP or let me know that you’ll be there (although you are welcome to do so if you’d like, just so I have an idea of how many will be attending). This will be very informal and very casual—it’s just an opportunity for fellow followers of Christ to get together and say a few words of prayer.

I look forward to seeing you there!


In this post, I’m going to illustrate one way to deploy CoreOS on OpenStack using Heat. By no means is this intended to be seen as the only way to use Heat to deploy CoreOS, but rather as one way of using Heat to deploy CoreOS. I’m publishing this in the hopes that others will be able to use this as a building block for their own deployments.

If you aren’t already familiar with OpenStack Heat or CoreOS, you might want to take a moment and refer to these introductory posts for some foundational information:

Moving forward, OpenStack Heat is trying to standardize on OpenStack resource types (like OS::Nova::Server) and the HOT format (using YAML). Therefore, the Heat template I’m presenting here will use OpenStack resource types and YAML. Note that it’s certainly possible to do this using CloudFormation (CFN) resource types and JSON formatting. I’ll leave the conversion of the template found here into CFN/JSON as an exercise for the readers.

Here’s the example Heat template you can use to deploy and customize CoreOS on OpenStack:

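A sketch of the kind of HOT template described here follows; the resource names, the flavor, and the angle-bracket placeholders are illustrative, and you’d fill in the IDs from your own environment:

```yaml
heat_template_version: 2013-05-23

description: Deploy a single CoreOS instance

resources:
  instance0_port0:
    type: OS::Neutron::Port
    properties:
      network_id: <ID of the Neutron network>
      security_groups:
        - <ID of the security group>

  instance0:
    type: OS::Nova::Server
    properties:
      image: <ID of the CoreOS image in Glance>
      flavor: m1.medium
      key_name: <name of the SSH key to inject>
      networks:
        - port: { get_resource: instance0_port0 }
      user_data_format: RAW
      user_data: |
        #cloud-config

        coreos:
          etcd:
            # Generate a new token at https://discovery.etcd.io/new
            discovery: https://discovery.etcd.io/<etcd cluster token>
            addr: $private_ipv4:4001
            peer-addr: $private_ipv4:7001
```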

Let’s walk through this template real quick:

  • For the network_id property on the Neutron port resource, you’ll need to provide the ID for the Neutron network to which the new CoreOS instance(s) should connect. You can get this a couple of different ways; running neutron net-list is one way.
  • For the image property on the server resource, you’ll need to supply the ID for the CoreOS image you’ve uploaded into Glance. Again, there are multiple ways to obtain this; running glance image-list is one way of getting that information.
  • For the security_groups property on the port, supply the ID of the security group you want applied to the CoreOS instance(s) being deployed. The neutron security-group-list command can give you the information you need to put here.
  • For the key_name property, supply the name of the SSH key you want to inject into the instance(s).
  • In the cloud-config passed via user_data, you’ll need to generate a unique cluster ID (also called a token) for the configuration of etcd within the CoreOS instance(s). You can generate a new token by visiting https://discovery.etcd.io/new. That will return a URL that contains the new etcd cluster token. Supply that token here to create a new etcd cluster out of the CoreOS instance(s) you’re deploying with this template.
  • This template only deploys a single CoreOS instance. To deploy multiple CoreOS instances, you’ll need a separate OS::Neutron::Port and OS::Nova::Server resource for each instance. For each Neutron port, you can reference the same security group ID and network ID. For each instance, you can reference the same Glance image ID, same SSH key, and same etcd cluster token; the only thing that changes per instance is the port listed in the server’s networks property, which should point to a unique Neutron port resource created for that instance (something like instance1_port0, instance2_port0, etc.).

Now, there are obviously lots of other things you could do here—you could create your own Neutron network to host these CoreOS instances, you could create a logical router to provide external connectivity (which is required, by the way, in order for the etcd cluster token discovery to work correctly), and you could create and assign floating IPs to the instances. Examples of some of these tasks are in the articles I provided earlier; others are left as an exercise for the reader. (Or I’ll write up something later. We’ll see.)

Once you have your template, you can deploy the stack using Heat, and then—after your CoreOS cluster is up and running—begin to deploy applications to the cluster using tools like fleet. That, my friends, is another story for another day.

Any questions? Corrections? Clarifications? Feel free to start (or join) the discussion below. All courteous comments are welcome.


A new startup emerged from stealth today, a company called Platform9. Platform9 was launched by former VMware veterans with the goal of making it easy for companies to consume their existing infrastructure in an agile, cloud-like fashion. Platform9 seeks to accomplish this by offering a cloud management platform that is itself provided as a cloud-based service—hence the name of this post, “cloud-hosted cloud management.”

It’s an interesting approach, and it certainly helps eliminate some of the complexity that organizations face when implementing their own cloud management platform. For now, at least, that is especially true for OpenStack, which can be notoriously difficult for newcomers to the popular open source cloud management environment. By Platform9 offering an OpenStack API-compatible service, organizations that want a more “public cloud-like” experience can get it without all the added hassle.

The announcements for Platform9 talk about support for KVM, vSphere, and Docker, though the product will GA with KVM support only (support for vSphere and Docker is on the roadmap). Networking support is also limited; in the initial release, Platform9 will look for Linux bridges with matching names in order to stitch together networks. However, customers will get an easy, non-disruptive setup with a nice set of dashboards to help show how their capacity is being utilized and allocated.

It will be interesting to see how things progress for Platform9. The idea of providing cloud management via an SaaS model (makes me think of “cloud inception”) is an interesting one that does sidestep many adoption hurdles, though questions of security, privacy, confidentiality, etc., may still hinder adoption in some environments.

Thoughts on Platform9? Feel free to speak up in the comments below. All courteous comments are welcome!


In this post, I’ll show you how I got Arista’s vEOS software running under KVM to create a virtualized Arista switch. There are a number of other articles that help provide instructions on how to do this, but none of those that I found included the use of libvirt and/or Open vSwitch (OVS).

In order to run vEOS, you must first obtain a copy of vEOS. I can’t provide you with a copy; you’ll have to register on the Arista Networks site (see here) in order to gain access to the download. The download consists of two parts:

  1. The Aboot ISO, which contains the boot loader
  2. The vEOS disk image, provided as a VMware VMDK

Both of these are necessary; you can’t get away with just one or the other. Further, although the vEOS disk image is provided as a VMware VMDK, KVM/QEMU is perfectly capable of using the VMDK without any conversion required (this is kind of nice).

Once you’ve downloaded these files, you can use the following libvirt domain XML definition to create a VM for running Arista vEOS (you’d use a command like virsh define <filename>).

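Here’s a skeleton of the kind of domain XML described in the notes that follow. The file paths, domain name, and the libvirt network/portgroup names are illustrative; the key details are the boot order, the IDE bus, the disk formats, and the e1000 NIC model:

```xml
<domain type='kvm'>
  <name>veos-01</name>
  <memory unit='KiB'>2097152</memory>
  <vcpu>1</vcpu>
  <os>
    <type arch='x86_64'>hvm</type>
    <!-- The Aboot ISO must boot first -->
    <boot dev='cdrom'/>
    <boot dev='hd'/>
  </os>
  <devices>
    <!-- Aboot boot loader ISO, attached via IDE as raw -->
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='/var/lib/libvirt/images/Aboot-vEOS.iso'/>
      <target dev='hdc' bus='ide'/>
    </disk>
    <!-- vEOS disk image; QEMU uses the VMDK without conversion -->
    <disk type='file' device='disk'>
      <driver name='qemu' type='vmdk'/>
      <source file='/var/lib/libvirt/images/vEOS.vmdk'/>
      <target dev='hda' bus='ide'/>
    </disk>
    <!-- First NIC becomes Management1; repeat for more interfaces -->
    <interface type='network'>
      <source network='ovs-network' portgroup='trunked'/>
      <model type='e1000'/>
    </interface>
  </devices>
</domain>
```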

There are a few key things to note about this libvirt domain XML:

  • Note the boot order; the VM must boot from the Aboot ISO first.
  • Both the Aboot ISO as well as the vEOS VMDK are attached to the VM as devices, and you must use an IDE bus. Arista vEOS will refuse to boot if you use a SCSI device, so make sure there are no SCSI devices in the configuration. Pay particular attention to the type= parameters that specify the correct disk formats for the ISO (type “raw”) and VMDK (type “vmdk”).
  • For the network interfaces, you’ll want to be sure to use the e1000 model.
  • This example XML definition includes three different network interfaces. (More are supported; up to 7 interfaces on QEMU/KVM.)
  • This XML definition leverages libvirt integration with OVS so that libvirt automatically attaches VMs to OVS and correctly applies VLAN tagging and trunking configurations. In this case, the network interfaces are attaching to a portgroup called “trunked”; this portgroup trunks VLANs up to the guest domain (the vEOS VM, in this case). In theory, this should allow the vEOS VM to support VLAN trunk interfaces, although I had some issues making this work as expected and had to drop back to tagged interfaces.

Once you have the guest domain defined, you can start it by using virsh start <guest domain name>. The first time it boots, it will take a long time to come up. (A really long time—I watched it for a good 10 minutes before finally giving up and walking away to do something else. It was up when I came back.) According to the documentation I’ve found, this is because EOS needs to make a backup copy of the flash partition (which in this case is the VMDK disk image). It might be quicker for you, but be prepared for a long first boot just in case.

Once it’s up and running, use virsh vncdisplay to get the VNC display of the vEOS guest domain, then use a VNC viewer to connect to the guest domain’s console. You won’t be able to SSH in yet, as all the network interfaces are still unconfigured. At the console, set an IP address on the Management1 interface (which will correspond to the first virtual network interface defined in the libvirt domain XML) and then you should have network connectivity to the switch for the purposes of management. Once you create a username and a password, then you’ll be able to SSH into your newly-running Arista vEOS switch. Have fun!

For additional information and context, here are some links to other articles I found on this topic while doing some research:

If you have any questions or need more information, feel free to speak up in the comments below. All courteous comments are welcome!


Welcome to Technology Short Take #43, another episode in my irregularly-published series of articles, links, and thoughts from around the web, focusing on data center technologies like networking, virtualization, storage, and cloud computing. Here’s hoping you find something useful.

Networking

  • Jason Edelman recently took a look at Docker networking. While Docker is receiving a great deal of attention, I have to say that I feel Docker networking is a key area that hasn’t received the amount of attention that it probably needs. It would be great to see Docker get support for connecting containers directly to Open vSwitch (OVS), which is generally considered the de facto standard for networking on Linux hosts.
  • Ivan Pepelnjak asks the question, “Is OpenFlow the best tool for overlay virtual networks?” While so many folks see OpenFlow as the answer regardless of the question, Ivan takes a solid look at whether there are better ways of building overlay virtual networks. I especially liked one of the last statements in Ivan’s post: “Wouldn’t it be better to keep things simple instead of introducing yet-another less-than-perfect abstraction layer?”
  • Ed Henry tackles the idea of abstraction vs. automation in a fairly recent post. It’s funny—I think Ed’s post might actually be a response to a Twitter discussion that I started about the value of the abstractions that are being implemented in Group-based Policy (GBP) in OpenStack Neutron. Specifically, I was asking if there was value in creating an entirely new set of abstractions when it seemed like automation might be a better approach. Regardless, Ed’s post is a good one—the decision isn’t about one versus the other, but rather recognizing, in Ed’s words, “abstraction will ultimately lead to easier automation.” I’d agree with that, with one change: the right abstraction will lead to easier automation.
  • Jason Horn provides an example of how to script NSX security groups.
  • Interested in setting up overlays using Open vSwitch (OVS)? Then check out this article from the ever-helpful Brent Salisbury on setting up overlays on OVS.
  • Another series on VMware NSX has popped up, this time from Jon Langemak. Only two posts so far (but very thorough posts), one on setting up VMware NSX and another on logical networking with VMware NSX.

Servers/Hardware

Nothing this time around, but I’ll keep my eyes open for more content to include next time.

Security

  • Someone mentioned I should consider using pfctl and its ability to automatically block remote hosts exceeding certain connection rate limits. See here for details.
  • Bromium published some details on an Android security flaw that’s worth reviewing.

Cloud Computing/Cloud Management

  • Want to add some Docker to your vCAC environment? This post provides more details on how it is done. Kind of cool, if you ask me.
  • I am rapidly being pulled “higher” up the stack to look at tools and systems for working with distributed applications across clusters of servers. You can expect to see some content here soon on topics like fleet, Kubernetes, Mesos, and others. Hang on tight, this will be an interesting ride!

Operating Systems/Applications

  • A fact that I think is sometimes overlooked when discussing Docker is access to the Docker daemon (which, by default, is accessible only via UNIX socket—and therefore accessible locally only). This post by Adam Stankiewicz tackles configuring remote TLS access to Docker, which addresses that problem.
  • CoreOS is a pretty cool project that takes a new look at how Linux distributions should be constructed. I’m kind of bullish on CoreOS, though I haven’t had nearly the time I’d like to work with it. There’s a lot of potential, but also some gotchas (especially right now, before a stable product has been released). The fact that CoreOS takes a new approach to things means that you might need to look at things a bit differently than you had in the past; this post tackles one such item (pushing logs to a remote destination).
  • Speaking of CoreOS: here’s how to test drive CoreOS from your Mac.
  • I think I may have mentioned this before; if so, I apologize. It seems like a lot of folks are saying that Docker eliminates the need for configuration management tools like Puppet or Chef. Perhaps (or perhaps not), but in the event you need or want to combine Puppet with Docker, a good place to start is this article by James Turnbull (formerly of Puppet, now with Docker) on building Puppet-based applications inside Docker.
  • Here’s a tutorial for running Docker on CloudSigma.

Storage

  • It’s interesting to watch the storage industry go through the same sort of discussion around what “software-defined” means as the networking industry has gone through (or, depending on your perspective, is still going through). A few articles highlight this discussion: this one by John Griffith (Project Technical Lead [PTL] for OpenStack Cinder), this response by Chad Sakac, this response by the late Jim Ruddy, this reply by Kenneth Hui, and finally John’s response in part 2.

Virtualization

  • The ability to run nested hypervisors is the primary reason I still use VMware Fusion on my laptop instead of switching to VirtualBox. In this post Cody Bunch talks about how to use Vagrant to configure nested KVM on VMware Fusion for using things like DevStack.
  • A few different folks in the VMware space have pointed out the VMware OS Optimization Tool, a tool designed to help optimize Windows 7/8/2008/2012 systems for use with VMware Horizon View. Might be worth checking out.
  • The VMware PowerCLI blog has a nice three part series on working with Customization Specifications in PowerCLI (part 1, part 2, and part 3).
  • Jason Boche has a great collection of information regarding vSphere HA and PDL. Definitely be sure to give this a look.

That’s it for this time around. Feel free to speak up in the comments and share any thoughts, clarifications, corrections, or other ideas. Thanks for reading!

