Scott's Weblog
The weblog of an IT pro specializing in cloud computing, virtualization, and networking, all with an open source view

Looking Under the Hood: containerD

This is a liveblog of the session titled “Looking Under the Hood: containerD”, presented by Scott Coulton with Puppet (and also a Docker Captain). It’s part of the Edge track here at DockerCon EU 2017, where I’m attending and liveblogging as many sessions as I’m able.

Coulton starts out by explaining the session (it will focus a bit more on how to consume containerD in your own software projects), and provides a brief background on himself. Then he reviews the agenda, and dives right into the content.

Up first, Coulton provides a bit of explanation around what containerD is and does. containerD exposes a gRPC API listening on a local UNIX socket, and there is a CLI tool (ctr) for interacting with it; Coulton points out that ctr is, currently, an unstable tool (it is changing quickly). Next, Coulton talks about how containerD provides support for the OCI Image Spec and the OCI Runtime Spec (of which runC is an implementation), image push/pull support, and management of namespaces.
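Since the session's focus is consuming containerD from your own software projects, here's a minimal sketch of what that looks like using containerD's Go client library. (This example is mine, not Coulton's; the socket path is containerD's default, and the "example" namespace is an arbitrary placeholder.)

    package main

    import (
    	"context"
    	"log"

    	"github.com/containerd/containerd"
    	"github.com/containerd/containerd/namespaces"
    )

    func main() {
    	// Connect to containerD's gRPC API over its local UNIX socket.
    	client, err := containerd.New("/run/containerd/containerd.sock")
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer client.Close()

    	// containerD namespaces isolate consumers (Docker, Kubernetes,
    	// your own code) from one another; "example" is a placeholder.
    	ctx := namespaces.WithNamespace(context.Background(), "example")

    	// Pull an image; push/pull support comes from containerD's
    	// implementation of the OCI Image Spec.
    	image, err := client.Pull(ctx, "docker.io/library/alpine:latest",
    		containerd.WithPullUnpack)
    	if err != nil {
    		log.Fatal(err)
    	}
    	log.Printf("pulled %s", image.Name())
    }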

Coulton moves into a demo showing off some of containerD’s functionality, using the ctr tool.
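I didn't capture the demo's exact commands, but ctr usage at the time of this session looks roughly like the following (hedged heavily, since, as Coulton noted, the tool is unstable and its syntax changes quickly):

    # Pull an image (ctr requires the full image reference)
    ctr images pull docker.io/library/alpine:latest

    # Create and start a container from that image, removing it on exit
    ctr run --rm docker.io/library/alpine:latest demo echo "hello from containerd"

    # List images and running tasks
    ctr images list
    ctr tasks list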

After the demo, Coulton talks about some other upstream projects that use containerD, including Moby and cri-containerd; he notes that containerD itself can leverage any OCI-compliant runtime (one that adheres to the OCI Runtime Spec). containerD is mostly used in Moby, LinuxKit, and Kubernetes.

This leads Coulton into a discussion of the Moby Project, the upstream open source project that feeds into the downstream Docker CE and Docker EE products. Various components of Moby include SwarmKit, HyperKit, InfraKit, LinuxKit, containerD, and runC.

Coulton also takes a moment to explain what's in Docker but isn't in containerD. As examples, containerD won't build images, and it lacks the access controls and content security features that Docker offers. To help illustrate some of these differences, Coulton shows a demo of Docker running inside a container managed by containerD.

Switching gears a bit now, Coulton changes topics to focus on LinuxKit. LinuxKit is a lean, composable OS where everything runs as a container. Currently, LinuxKit is running kernel 4.9, with newer versions listed as “experimental.” LinuxKit supports any container runtime, which means you could use containerD/runC. LinuxKit has some advantages over “traditional” OSes in that it is smaller, with a smaller attack surface, and designed specifically for use with immutable infrastructure patterns. By default, a LinuxKit image contains an init image, containers for containerD and runC, and some CA certificates—that’s all.

This brings us to a demo by Coulton of building a LinuxKit image. Because LinuxKit is so small and builds quickly, it’s a pretty short demo. To build a LinuxKit image, you’d use the moby tool and a YAML file that specifies what should be included in the LinuxKit image.
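For reference, a LinuxKit YAML file has the general shape shown below. This is a sketch, not a working configuration: real files pin specific image tags/hashes, and the placeholders here are mine.

    kernel:
      image: linuxkit/kernel:4.9.x        # placeholder tag
      cmdline: "console=tty0"
    init:
      - linuxkit/init:<tag>               # placeholder tags throughout
      - linuxkit/runc:<tag>
      - linuxkit/containerd:<tag>
      - linuxkit/ca-certificates:<tag>
    onboot:
      - name: dhcpcd
        image: linuxkit/dhcpcd:<tag>
    services:
      - name: getty
        image: linuxkit/getty:<tag>

With a file like this in hand (say, minimal.yml), building the image is just moby build minimal.yml.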

Moving on from LinuxKit, Coulton now turns to Kubernetes and containerD. In order for containerD to interact with Kubernetes, the kubelet talks to a CRI (Container Runtime Interface) shim via gRPC, and the shim in turn talks to a container runtime (such as containerD). The CRI shim needed in the case of containerD is cri-containerd.
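In practical terms, wiring the kubelet to the shim comes down to a couple of flags; the socket path below is cri-containerd's default as I understand it, so treat this as a sketch rather than gospel:

    kubelet --container-runtime=remote \
            --container-runtime-endpoint=/var/run/cri-containerd.sock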

At this point, Coulton wraps up his content and opens up the session to questions from the audience.

Building a Secure Supply Chain

This is a liveblog of the session titled “Building a Secure Supply Chain,” part of the Using Docker track at DockerCon EU 2017 in Copenhagen. The speakers are Ashwini Oruganti (@ashfall on Twitter) and Andy Clemenko (@aclemenko on Twitter), both from Docker. This session was recommended in the Docker EE deep dive (see the liveblog for that session) as a way to get more information on Docker Content Trust (image signing). The Docker EE deep dive presenter only briefly discussed Content Trust, so I thought I’d drop into this session to get more information.

Oruganti starts the session by reviewing some of the steps in the software lifecycle: planning, development, testing, packaging/distribution, support/maintenance. From a security perspective, there are some additional concepts as well: code origins, automated builds, application signing, security scanning, and promotion/deployment. Within Docker EE, there are three features that help with the security aspects of the lifecycle: signing, scanning, and promotion. (Note that scanning and promotion were also discussed in the Docker EE deep dive, which I liveblogged; link is in the first paragraph).

Before getting into the Docker EE features, Clemenko reminds attendees how not to do it: manually. A manual approach doesn't scale, and leaves organizations open to security holes. With respect to the software supply chain, there are two starting points: the Docker Store and your source code repositories. Clemenko briefly plugs the Docker Store, Docker Certified items, and Docker Official images. Moving past the Certified and Official images is only recommended, according to Clemenko, if the images are automated builds, you have access to the Dockerfile, and it makes sense for your use case. If these criteria can't be met, then you should look at building your own images, and Clemenko briefly reviews some recommendations around the types of files that should be stored in a source code repository.

Clemenko next reminds the audience that automated builds are strongly recommended. One reason would be to provide an audit trail that shows the history of image builds.

Oruganti now takes over again to dig into the related Docker EE features. First up is image signing (Docker Content Trust). Oruganti reviews a few aspects of Docker Content Trust, and then goes into a new feature of the docker trust functionality that is intended to make image signing much easier. The demo of the new feature (which is apparently the docker trust sign command) shows how you could use UCP to set a signing policy, refuse to run applications that don’t meet the signing policy, and then easily sign an image from the CLI.
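Judging by how the docker trust functionality surfaced in the CLI, the workflow looks roughly like the following; the repository name is a placeholder, and the exact commands in the demo may have differed, since the feature was brand new at the time:

    # Generate a signing key and delegate signing rights for a repository
    docker trust key generate alice
    docker trust signer add --key alice.pub alice dtr.example.com/dev/app

    # Sign (and push) a specific tag
    docker trust sign dtr.example.com/dev/app:1.0

    # Inspect the signatures on the repository
    docker trust inspect --pretty dtr.example.com/dev/app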

Next, Oruganti moves on to image scanning. She doesn’t spend much time here at all, and quickly moves on to talking about image promotion. As covered in the Docker EE deep dive session, image promotion is about moving “blessed” images between repositories within a single registry. Oruganti hands it off to Clemenko to do a demo of the image promotion functionality.

Clemenko's demo makes some changes to the source code, then performs a git commit and git push to a GitLab repository. GitLab CI runs a build and pushes the resulting image to a Docker Trusted Registry (DTR) instance. The DTR repository gets updated automatically by GitLab, and then a promotion policy (using the criterion that there are no vulnerabilities) controls whether images pushed to this DTR repository will be automatically promoted to another repository (still within the same DTR instance). This is all the same stuff I saw in the Docker EE deep dive, but Clemenko also adds a webhook that calls GitLab CI again after promotion to sign the promoted image (using the new docker trust functionality Oruganti mentioned).
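A .gitlab-ci.yml for the build-and-push portion of a pipeline like this might look something like the sketch below; the registry hostname, repository name, and CI variables are hypothetical stand-ins, not what Clemenko actually used:

    stages:
      - build

    build-and-push:
      stage: build
      script:
        # dtr.example.com, dev/app, DTR_USER, and DTR_PASS are placeholders
        - docker login -u "$DTR_USER" -p "$DTR_PASS" dtr.example.com
        - docker build -t dtr.example.com/dev/app:$CI_COMMIT_SHA .
        - docker push dtr.example.com/dev/app:$CI_COMMIT_SHA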

Next, Clemenko sets up a demo that he knows will fail (he uses an older version of an image that contains known vulnerabilities). As expected/planned, the scan appropriately flags that the image coming from GitLab CI has vulnerabilities, and therefore the promotion policy is not triggered (the image is not promoted to the public repository).

Oruganti and Clemenko wrap up the session by pointing attendees to the same free 4-hour trial of Docker EE mentioned in the Docker EE deep dive, and then open the session up to questions from the audience.

Docker EE Deep Dive

This is a liveblog of the session titled “Docker EE Deep Dive,” part of the Docker Best Practices track here at DockerCon EU 2017 in Copenhagen, Denmark. The speaker is Patrick Devine, a Product Manager at Docker. I had also toyed with the idea of attending the Cilium presentation in the Black Belt track, but given that I attended a version of that talk in Austin in April (liveblog is here), I figured I’d better stretch my boundaries and dig deeper into Docker EE.

Devine starts with a bit of information on his background, then provides an overview of the two editions (Community and Enterprise) of Docker. (Recall again that Docker is the downstream product resulting from the open source Moby upstream project.) Focusing a bit more on Docker EE, Devine outlines some of the features of Docker EE: integrated orchestration, stable releases for 1 year with support and maintenance, security patches and hotfixes backported to all supported versions, and enterprise-class support.

So what components are found in Docker EE? It starts with the Docker Engine, which has the core container runtime, orchestration, networking, volumes, plugins, etc. On top of that is Universal Control Plane (UCP), which provides the higher-level management functions. UCP managers are typically clustered, and they leverage the Raft consensus protocol for clustering (Raft is the protocol behind etcd, for example). UCP workers are the nodes on which you actually will deploy containerized workloads. Finally, you also have the Docker Trusted Registry (DTR), which can also run in a high-availability mode; DTR provides the registry/repository for storing container images.

Next, Devine moves into discussing security scanning, one of the features of Docker EE. Image scanning actually occurs at the binary level; it’s not just looking at package versions but going through every single file. This functionality works both online (has Internet access) and offline (without Internet access, good for air-gapped installations). Scanning is supported for both Linux (x86_64) and Windows images (with support for IBM z Series in the works).

Under the covers, scanning works in a series of steps. First, it gets the image layers (you can also see this information using docker history). Next, it generates a bill of materials; this is really just a JSON file containing details for the components located within each of the layers DTR found, along with the vulnerabilities associated with those components.
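Devine doesn't show the file itself, but conceptually the bill of materials is something like the following. (This is a hypothetical illustration of the idea, not DTR's actual schema.)

    {
      "layers": [
        {
          "digest": "sha256:...",
          "components": [
            {
              "name": "openssl",
              "version": "1.0.2g",
              "vulnerabilities": ["CVE-2016-2107"]
            }
          ]
        }
      ]
    }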

Image signing (Docker Content Trust) is integrated directly into DTR as well. Devine doesn’t spend much time talking about image signing; instead, he points attendees to a separate session occurring later today (Wednesday 10/18) that focuses on image signing specifically.

The next feature of DTR that Devine discusses is image distribution, which comprises image caching, image promotion, and image mirroring (an as-yet-unreleased feature). Image caching was first introduced in DTR 2.2, image promotion came in DTR 2.3, and image mirroring is still “in the works”.

Devine now provides a few more details about some of these image distribution features:

  • Image caching can be compared to a content distribution network (CDN) for Docker images by caching layers closer to where the layers are being consumed (pulled). It works globally for all repositories in DTR, and preserves access permissions.
  • Image promotion is about moving images between repositories in the same DTR (for example, moving from “dev” to “staging” to “qa” to “production”). Promotion can be done manually, or handled automatically via a policy. Images can be re-tagged as part of the promotion process, and the repositories each have their own access control. Policies can be written to leverage a variety of criteria (tags, vulnerabilities [or lack thereof], package presence or version, size, or even software license). Promotion policies can also be “chained” (promoting from dev to QA and then to production), and can even branch out to multiple repositories.
  • Finally, image mirroring is similar to image promotion in that it works on “blessed” images, but this time across registries (promotion is across repositories within a registry). Registries maintain their own access control, and mirroring is bi-directional (supporting push, pull, and webhook-initiated mirroring topologies). Policies can be used to push to remote DTR instances. Image signing and image scanning data will not be preserved across registries. (Devine reminds everyone that this is not yet available in DTR, but is coming soon.)

Devine switches to perform a demo of the new image mirroring functionality. Because the UI/UX for mirroring isn’t yet complete, Devine uses DTR’s RESTful API to configure the mirroring between two DTR instances (for the demo, these are VMs running—under VMware Fusion, it appears—on Devine’s laptop).
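Since the mirroring API isn't public yet, I can only gesture at what configuring it over REST might look like; the endpoint path and payload below are purely illustrative and almost certainly won't match the eventual documented API:

    # Hypothetical endpoint and payload; check the DTR API docs once mirroring ships
    curl -sk -u admin:$TOKEN -X POST \
      "https://dtr1.example.com/api/v0/repositories/dev/app/pushMirroringPolicies" \
      -H "Content-Type: application/json" \
      -d '{"remoteHost": "https://dtr2.example.com", "remoteRepository": "dev/app"}'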

Devine closes out the session by letting attendees know about a free 4-hour trial of Docker EE that's available, in case they're interested in trying out any of the features that were discussed in the session. Following some Q&A, Devine ends the session.

DockerCon EU 2017 Day 2 Keynote

This is a liveblog of the day 2 keynote/general session here in Copenhagen, Denmark, at DockerCon EU 2017. Yesterday’s keynote (see the liveblog here) featured the hotly-anticipated Kubernetes announcement (I shared some thoughts here), so it will be interesting to see what Docker has in store for today’s general session.

At 9:02am, the lights go down and Scott Johnston, COO of Docker (@scottcjohnston on Twitter), takes the stage. Johnston provides a brief recap of yesterday's activities, from the keynote to the breakout sessions to the party last night, then dives into content focusing on modernizing traditional applications through partnerships. (If two themes have emerged from this year's DockerCon EU, they are “Docker is a platform” and “Modernize traditional applications”.) Johnston shares statistics showing that 50% of customers list leveraging hybrid cloud as a priority, and that increasing major release frequency is also a priority for enterprise IT organizations. According to Johnston, 79% of customers say that increasing software release velocity is a goal for their organizations. Continuing with the statistics, Johnston shows a very familiar set of numbers stating that 80% of IT spend is on maintenance (I say familiar because these numbers were also used by VMware—and other vendors, no doubt—in years past). This leads Johnston to a discussion of the Modernizing Traditional Applications (MTA) Proof of Concept (POC) Program (the MTA POC Program, or the MTAPOCC—acronyms, anyone?). Based on the results of the MTA POC Program, Johnston shares that Docker (and their partners, who thus far remain unnamed) has been successful in modernizing (containerizing) 100% of applications. He proceeds to share a story of successfully containerizing a 2005-era .NET 2.0-based application in Docker. Johnston states that the program has proven successful in making applications more “hybrid cloud ready,” increasing software release agility, and improving the security of applications (via isolation and integrity, through digital signing).

Johnston now brings out Ashwini Oruganti and Riyaz Faizullabhoy to do a demo that illustrates some of the concepts that Johnston just discussed. Oruganti and Faizullabhoy walk through a series of security checklist items and show how Docker EE satisfies each of the requirements. The various features include encrypted networks, appropriate role-based access control (RBAC), and digital signatures on container images. Toward the end of the demo, Kristie Howard (from yesterday's demos) makes a brief appearance, then the demo ends.

Johnston returns to the stage following the demo to talk about how Docker's MTA POC Program offers “revolutionary results” through an “evolutionary approach”. He then brings out Iain Gray and Brandon Royal to talk a bit about Docker's approach to modernizing traditional applications.

Gray and Royal recap the properties of the MTA approach: incremental, non-disruptive, and customer-driven. The MTA approach starts with a proof-of-concept, followed by quickly putting those early PoC apps into production, and then scaling the applications (and the volume of modernization, I assume) at the customer’s pace.

Diving a bit deeper into the process, Gray reviews that the PoC portion of the MTA approach starts with an assessment step. The assessment step identifies candidate applications (which are typically Linux/Java or Windows/.NET applications) and defines the success criteria. Once candidate applications have been identified and success criteria defined, customers can proceed with containerizing the application, deploying the application, and finally going into a measurement phase. In the measurement phase, it's about measuring the success criteria and verifying they were satisfied/achieved, as well as building a return-on-investment (ROI) model to show benefits of continuing the MTA process.

Royal takes over now to talk about the second part of the MTA approach (moving the first applications into production), which itself has a number of phases (assessing, containerizing, operationalizing, test and acceptance, going live, and finally measuring/learning/closing the feedback loop). In addition to the different phases listed, this second step of the MTA approach also involves building a foundation for building and operating modernized applications. According to Royal, this involves establishing appropriate governance (internal SLAs, training, documentation), deploying the platform (deploying Docker EE, integrating with other systems), and creating a toolchain (CI/CD pipelines, Content Trust, Security Scanning). Royal repeats that while containerizing the application is important, it’s about building a repeatable process and framework. (In my view, this is the hard part.)

Gray takes over from Royal to talk about the third step in the MTA approach, which involves scaling applications and scaling the MTA process (now that a repeatable framework has been established). Gray indicates that as the MTA process scales within an organization, the foundation Royal discussed will evolve and grow. The final step is innovating at a customer-driven pace; here, Gray mentions things like refactoring applications, moving to a microservices-based architecture, or deploying to the cloud (or hybrid cloud).

At this point, Gray brings out Markus Niskanen and Oscar Renalias (from Finnish Rail and Accenture, respectively) to provide a real-life example of how Finnish Rail has embraced containers and Docker to modernize/containerize their traditional applications. Some of the drivers pushing Finnish Rail were cost, speed, and quality (the same factors that have been mentioned by others numerous times in this keynote presentation).

Returning to the stage, Johnston reviews the importance of partners in the MTA journey. Partners provide guidance, choice, and innovation. This leads to an announcement from Johnston that IBM is joining the MTA Program. Johnston brings out Jason McGee from IBM Cloud Platform to talk more about the value that IBM brings to the Docker MTA Program. McGee spends a few minutes talking about the length and maturity of the relationship between IBM and Docker and the value IBM brings to Docker, the Docker MTA Program, the Docker community, and Docker customers. McGee also shares a few announcements: IBM is now Docker Certified and will have IBM software in the Docker Store, and there will be a Docker for IBM Cloud offering (similar to Docker for AWS or Docker for Azure). Along the way, McGee does a demo of Docker for IBM Cloud. It's actually a pretty good demo, showing how to integrate applications with IBM services like Watson.

Johnston now comes back to the stage following IBM’s presentation and demo to thank all of Docker’s MTA Program partners. Looking ahead, Johnston says that most of the requests from customers regarding modernizing (containerizing) applications come in two major categories: support for more application types (C/C++, COBOL, etc.), and more automation tools (discovery, dependency mapping, ROI, etc.).

In closing, Johnston reviews some breakout sessions that will provide more information and/or insight on the topics covered in the general session, and then closes the session.

Some Thoughts on the Docker-Kubernetes Announcement

Today at DockerCon EU, Docker announced that the next version of Docker (and its upstream open source project, the Moby Project) will feature integration with Kubernetes (see my liveblog of the day 1 general session). Customers will be able to choose whether they leverage Swarm or Kubernetes for container orchestration. In this post, I’ll share a few thoughts on this move by Docker.

First off, you may find it useful to review some details of the announcement via Docker’s blog post.

Done reviewing the announcement? Here are some thoughts; some of them are mine, some of them are from others around the Internet.

  • It probably goes without saying that this announcement was largely anticipated (see this TechCrunch article, for example). So while the details of how Docker would go about adding Kubernetes support were not clear, many people expected some form of announcement around Kubernetes at the conference. I'm not sure that folks expected this level of integration, or that the integration would take this particular shape/form.
  • In looking back on the announcement and the demos from today’s general session and in thinking about the forces that drove Docker to provide Kubernetes integration, it occurs to me that this almost necessitated the direction/manner of integration. Think about it: had Docker chosen to “separate” Docker and Kubernetes under different software stacks, they would have perpetuated the (perceived) battle between Docker and Kubernetes. However, by positioning Docker “above” Kubernetes, Docker instead emphasizes that the true value of Docker isn’t the runtime (runC or containerD) and it isn’t the orchestration (Swarm and Kubernetes). It is, instead, the developer-friendly workflow, ease of use, and management functionality. It also reinforces Docker as a platform, a message that was definitely hammered home in the general session.
  • Integrating Kubernetes is, in my opinion, just a continuation of the effort that originated in the spin-out of runC to the OCI (and later the donation of containerD to the CNCF)—all these steps are necessary to allow Docker to divorce itself from the perception that “Docker == containers”. (VMware faces a similar challenge, in that folks think “VMware == VMs/hypervisor”.)
  • Naturally, there are questions now regarding what will happen to Swarm. The general consensus (see this post by Nigel Poulton and this post by Laura Frank) is that Swarm—as an orchestration mechanism—isn’t going anywhere anytime soon, and I’d agree with this assessment. At the same time, given the industry momentum around Kubernetes—Rancher’s rebuild on top of Kubernetes is one example, VMware’s joint announcement (with Google and Pivotal) of Pivotal Container Service is another—it’s also fairly apparent that leveraging Kubernetes instead of Swarm is probably a better long-term choice. (It would be interesting to think about how Swarm/SwarmKit might be used to enhance Kubernetes.)

As others have said, this is definitely an interesting time in which to work in the technology field. Look for more liveblogging from DockerCon EU 2017 tomorrow, where I’ll be covering the general session in the morning and as many breakout sessions as I can cram into my schedule.

Recent Posts

Container-Relevant Kernel Developments

This is a liveblog of a Black Belt track session at DockerCon EU in Copenhagen. The session is named “Container-Relevant Kernel Developments,” and the presenter is Tycho Andersen.


LinuxKit Deep Dive

This is a liveblog of the DockerCon EU session titled “LinuxKit Deep Dive”. The speakers are Justin Cormack and Rolf Neugebauer, both with Docker, and this session is part of the “Black Belt” track here at DockerCon.


Rock Stars, Builders, and Janitors: You're Doing it Wrong

This is a liveblog of the session titled “Rock Stars, Builders, and Janitors: You’re Doing it Wrong”. The speaker is Alice Goldfuss (@alicegoldfuss) from GitHub. This session is part of the “Transform” track at DockerCon; I’m attending it because I think that cultural and operational transformation is key for companies to successfully embrace new technologies like containers and fully maximize the benefits of these technologies. (There’s probably a blog post in that sentence.)


DockerCon EU 2017 Day 1 Keynote

This is a liveblog of the day 1 keynote/general session at DockerCon EU 2017 in Copenhagen, Denmark. Prior to the start of the keynote, attendees are “entertained” by occasional clips of some Monty Python-esque production.


Technology Short Take 88

Welcome to Technology Short Take #88! Travel is keeping me pretty busy this fall (so much for things slowing down after VMworld EMEA), and this has made it a bit more difficult to stick to my self-imposed biweekly schedule for the Technology Short Takes (heck, I couldn’t even get this one published on Friday!). Sorry about that! Hopefully the irregular schedule is outweighed by the value found in the content I’ve collected for you.


Upcoming Spousetivities Events

Long-time readers/followers know that my wife, Crystal, runs a program called Spousetivities. This program organizes events for spouses/partners/significant others at IT industry conferences. This fall is a particularly busy season for Crystal and Spousetivities, as she'll be organizing events at DockerCon EU, the fall OpenStack Summit, and AWS re:Invent! For the first time, Spousetivities will be present at DockerCon EU, taking place this year in Copenhagen, Denmark.

Technology Short Take 87

Welcome to Technology Short Take #87! I have a mix of newer and older items for you this time around. While I’m a bit short on links in some areas, hopefully this is outweighed by some good content in other areas. Here’s hoping you find something useful!


Some Static Site Resources

Over the last few days—prompted perhaps by my article with some additional information on my site migration—a few folks in the community have reached out to me to share some resources they thought I might find useful. In turn, I’d like to share them with you, my readers, in the event you might find them useful as well.


HashiConf 2017 Wrap Up

HashiConf 2017 is a wrap for me, and as I'm sitting here at the airport lounge in Austin I thought I'd post links back to the liveblogs I published as well as a few thoughts on the conference overall.


Liveblog: Cloud Native Infrastructure

This is a liveblog of the HashiConf 2017 session titled “Cloud Native Infrastructure.” The speaker is Kris Nova, a Senior Developer Advocate at Microsoft. Kris, along with Justin Garrison, authored the O’Reilly Cloud Native Infrastructure book (more information here). As one of the last sessions (if not the last session) I’ll be able to attend, I’m looking forward to this session.


HashiConf 2017 Day 2 Keynote

This is a liveblog of the day 2 keynote (general session) at HashiConf 2017 in Austin, TX. Speakers today will (apparently, based on the schedule) include someone from Amazon Web Services and Kelsey Hightower from Google.


Liveblog: Terraform Abstractions for Safety and Power

This is a liveblog for the HashiConf 2017 session titled “Terraform Abstractions for Safety and Power.” The speaker is Calvin French-Owen, Founder and co-CTO at Segment.


Liveblog: Journey to the Cloud with Packer and Terraform

This is a liveblog of the HashiConf 2017 breakout session titled “Journey to the Cloud with Packer and Terraform,” presented by Nadeem Ahmad, a senior software developer at Box.


HashiConf 2017 Day 1 Keynote

This is a liveblog from the day 1 keynote (general session) at HashiConf 2017 in Austin, TX. I’m attending HashiConf this year as an “ordinary attendee” (not working or speaking), and so I’m looking forward to being able to actually sit in on sessions for a change.


New Website Features

One of the reasons I migrated this site to Hugo a little over a month ago was that Hugo offered the ability to do things with the site that I couldn’t (easily) do with Jekyll (via GitHub Pages). Over the last few days, I’ve taken advantage of Hugo’s flexibility to add a couple new features to the site.

