Week 17 2017 - DockerCon 2017

(5 minute read)

With DockerCon taking place last week, today’s note is all Docker, with Moby and LinuxKit as the most important items, but I’ll also take a brief look at the Modernize Traditional Apps program and Bitbucket Pipelines’ new service container feature.

Moby

Moby is Docker’s new name for the underlying components and framework used to build Docker, and it is open source. The idea is that all the existing base components that until now were part of Docker (or separate projects) will become part of Moby. In addition, it will contain a framework for assembling these components into a container platform, as well as a reference platform called Moby Origin.

This is not meant to be used by application developers and the like, but the aim is similar to the breaking out of containerd: to have a common ground for different projects. Aside from that it will also be used by Docker as their R&D and experimentation platform.

For us as end-users of Docker’s products (aka the binaries we use as runtime), nothing will change. In a way this is only a renaming of the bottom layer, and the various Docker versions will still be available in the same way. In short, the build path of the various components will be:

Moby -> Docker Community Edition -> Docker Enterprise Edition

At each stage some things will be added, but the core functionality will be the same as in any other container platform that uses Moby.

Obviously, I can only see this as a good thing. Looking at various discussions I’ve noticed that there are some teething issues (as always when dependencies are suddenly renamed), but when it comes to having confidence in the future of containers, this is a step in the right direction. Whether we’ll suddenly start seeing Docker-compatible alternatives is another matter, but at the very least it means that if Docker the company fails, someone else can continue from an existing base.

LinuxKit

LinuxKit is Docker’s solution for running on the various platforms that aren’t Linux. The usual examples of macOS and Windows fall under this, but the same goes for cloud platforms. The aim of the now open-source LinuxKit is to provide an underlying Linux subsystem. As far as I can gather from the description[1], it is basically a minimal Linux kernel that can run containers, with all system services running in their own sandboxed containers. This makes it more flexible and lowers the security risk, similar to other container-specific Linux distributions.
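
Going by the examples in the GitHub repository, an image is assembled from a YAML file that lists the kernel plus the containers to run at boot and as services. Below is a rough sketch of what such a file looks like; I haven’t built this myself, and the image tags are placeholders, so treat it as an illustration rather than a working config.

```yaml
# Sketch of a LinuxKit build file (untested; image tags are placeholders).
kernel:
  image: linuxkit/kernel:4.9.x       # the Linux kernel to boot
  cmdline: "console=ttyS0"
init:
  - linuxkit/init:<tag>              # minimal init
  - linuxkit/runc:<tag>              # runtime used to start the containers below
onboot:
  - name: dhcpcd
    image: linuxkit/dhcpcd:<tag>     # one-shot container run at boot (network setup)
services:
  - name: sshd
    image: linuxkit/sshd:<tag>       # long-running service, sandboxed in its own container
```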

Without the chance to try it out myself, I don’t know yet how this compares to other container-specific OSs, and whether it can be used in the same way or needs to be built into something else. If you know more about this, don’t hesitate to mention it in the comments. Otherwise, feel free to go through the source code on GitHub.

Modernize Traditional Apps Program

If you’ve had to deal with existing enterprise applications in the past couple of years, you’ve probably had the feeling that you’d really like to put all of it in a container and be rid of it. Unless I’m the only one, in which case Docker’s new “Modernize Traditional Apps” program is dead in the water.

This is not an open source initiative, but instead a combination of services and Docker EE to help you get your existing legacy[2] Java and .NET applications into working Docker containers. The aim is to do so in only five days, which probably makes it a good deal for enterprises, as presumably that means it includes best practices regarding security and the like, for a process that might otherwise take far longer with an internal development team learning to deal with Docker[3].

Bitbucket Pipelines Connects to Docker Containers

With Wercker’s acquisition last week, I’m seriously considering moving away from it as my personal[4] CI/CD tool of choice. So when Bitbucket releases another update to Pipelines that allows it to connect to service containers, similar to what Wercker can do, it makes me want to take another look.

The idea behind these service containers is that aside from your regular build container, you can spin up, for example, a database container or a web server to connect to from your integration tests. This means you don’t have to build these into your build container, but instead have a setup far closer to your production environment, where you wouldn’t run your database locally either.
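
To make that concrete, here is a rough sketch of what a bitbucket-pipelines.yml with a database service container could look like. I haven’t run this exact file, and the images and variables below are only examples, so check the Pipelines documentation for the precise syntax.

```yaml
# Sketch of a bitbucket-pipelines.yml using a service container (untested example).
image: node:6                     # regular build container

pipelines:
  default:
    - step:
        script:
          - npm install
          - npm test              # tests talk to the postgres service from the build container
        services:
          - postgres              # attach the service container to this step

definitions:
  services:
    postgres:
      image: postgres:9.6         # runs alongside the build container
      variables:
        POSTGRES_DB: testdb       # example environment variable for the service
```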

Currently this is limited to three service containers (which is enough for most basic uses), and I’m interested in trying all of this out in the near future. Most of the limitations I found with Pipelines during my initial review last year are still there, but I’ll probably give it a fresh look after my holiday.


  1. Holidays are great for lots of things, but traveling light and a lack of internet connection (unless I stayed in the hotel) means that I don’t get to play around with new stuff.

  2. “Traditional” is just as much a euphemism as “legacy”, but makes it sound even less like the bloatware it often is.

  3. Obviously it’s not aimed at experienced Docker users, although I can imagine they might not want to deal with these legacy applications and would rather hand that problem to someone else.

  4. For work I have different solutions, depending on the client.
