In part two, I said that the Dockerfile and fig worked nicely on my box in boot2docker. Given that Docker exists to move things from environment to environment, I anticipated that the system would now work in our Jenkins-on-Linux continuous integration environment. An error on my part…

It turns out that I'd downloaded a version of boot2docker which supplied Docker 1.5. That's a very recent release, only out since February. The Jenkins server is running Docker 1.3, as the CentOS repos it uses don't yet have anything newer. Docker 1.3.2 was released last November, with 1.3.1 in October. I'm not sure, sitting here at home, which minor version the Jenkins server is running, but what I am sure of is that the behavior of the Dockerfile WORKDIR instruction changed between the two.

Docker does a good job of making it easy to jump between the documentation for different versions. In 1.3, the documentation for WORKDIR says it sets the working directory for any RUN, CMD, and ENTRYPOINT instructions that follow it in the Dockerfile. In 1.5, WORKDIR also applies to COPY and ADD. My Dockerfile used ADD to bring specific files into the container, so that variance in behavior meant that my ENTRYPOINT couldn't actually find the files I'd moved in. It took me a while to figure out: inspecting the docker logs had told me that the ENTRYPOINT wasn't working, but not a whole lot more. To debug it, I had to take out my ENTRYPOINT so that I could get the container to start up, and then go in with a `docker run -it -d [IMAGE] bash` to actually go look and see where things were.
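Boiled down to its essentials, the failing pattern looked something like this (the file name is made up for illustration):

```dockerfile
# Minimal sketch of the pattern that bit me; entrypoint.sh stands in
# for the files my real Dockerfile added.
FROM debian:wheezy
WORKDIR /app

# Docker 1.5 resolves this relative destination against WORKDIR, so the
# file lands in /app. Docker 1.3 ignores WORKDIR for ADD/COPY, so the
# file ends up relative to the image root instead.
ADD entrypoint.sh entrypoint.sh

# Runs with /app as the working directory under both versions, so under
# 1.3 the script simply isn't where ENTRYPOINT expects it.
ENTRYPOINT ["./entrypoint.sh"]
```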

Once I knew I had a WORKDIR behavior variance, I went looking at the release notes for Docker, from v1.3 through 1.5. WORKDIR was mentioned, but only in the context of environment variable expansion. I also went looking on GitHub for issues with more information, and didn't find anything that appeared relevant. (There are 19 open and 68 closed issues related to WORKDIR, as of this evening.)

I spent the time writing this up mostly to capture my debugging path, and perhaps make this easier for someone else to find. It also makes me consider digging into what support Docker provides for indicating the version of Docker an image was built against. We have `docker history` to see the list of commands used to build the layers. We have `docker info` and `docker version` to pull the information about our own Docker environment. Some way of annotating which Docker version a Dockerfile has been tested against, and/or which version an image was built with, could be very useful for judging the viability of a Dockerfile in another environment. Beyond commands changing behavior, there is the very real likelihood that new Dockerfile commands will be added in new Docker releases. Having some notion of 'these versions match' versus 'these might not' would be a good place to start.
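In the meantime, one ad-hoc workaround (my own convention here, not anything Docker itself provides) would be to bake the tested version into the image's environment so it survives into the built image:

```dockerfile
# Hypothetical convention: record the Docker version this Dockerfile
# was tested against as an environment variable in the image.
FROM debian:wheezy
ENV TESTED_WITH_DOCKER 1.5.0
```

A `docker inspect -f '{{ .Config.Env }}' <image>` on the target machine then shows the annotation, and a build script could compare it against the local `docker version` before trusting the image.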

Note: I did check registry.hub.docker.com to see how they handle support for differing versions of Docker. The official distributions of mongo, CentOS, and redis tag their releases and provide links to their Dockerfiles, but generally just state:

'This image is officially supported on Docker version 1.5.0.

Support for older versions (down to 1.0) is provided on a best-effort basis.'

I did use the official mongo release as part of my efforts; it seemed to do fine across 1.3 versus 1.5. A quick peek into its Dockerfile shows that it wouldn't have tripped the WORKDIR issue, at least unless its debian:wheezy base image caused one.

Note: further poking shows that someone filed an issue for Dockerfile version information about a year ago (March 29, 2014), in issue #4907. Happily for my cause, it was tagged by a software engineer at Docker as a proposed feature just two weeks ago.


So, my last post ended with the statement that ‘having a non-breaking build is a wonderful thing’.  Too bad it wasn’t quite that easy…

After verifying that my Dockerfile ran in my boot2docker environment, I merrily checked in my changes to the base build, pressed 'Build' on Jenkins, and waited for the build success message to appear in our team chat room. No luck: build failed.

OK, it turns out that our build is actually controlled by a bash script which calls fig and docker. No worries: I'll just test it out in boot2docker, as I really don't like build-break messages coming over the wire. Here's where the problems started…

1. First step: do the fig commands in our build.sh work in boot2docker?

boot2docker for OSX comes with fig preinstalled. boot2docker for Windows doesn't, at least as of issue 603, which is still open for boot2docker 1.5. fig is now Docker Compose, and I found a cheat that adds an alias for fig which, under the covers, runs fig from a Docker container.
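The cheat looks roughly like this (the image name is an assumption; substitute whichever containerized fig build you trust):

```sh
# Run fig from a container instead of installing it in the VM. The
# docker.sock mount lets the containerized fig drive the host's Docker
# daemon, and mounting the current directory exposes fig.yml to it.
alias fig='docker run --rm -it -v /var/run/docker.sock:/var/run/docker.sock -v "$(pwd)":"$(pwd)" -w "$(pwd)" dduportal/fig'
```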

2. Second step: run the build.sh itself…

It turns out that ./build.sh doesn't work in boot2docker. That's because boot2docker's underlying Linux distribution, Tiny Core, doesn't have bash installed. This turns out to be solvable with tce-load, à la `tce-load -wi bash.tcz`.
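Inside the VM, the whole step is just:

```sh
# Inside the boot2docker VM: pull the bash extension from the Tiny Core
# repository, install it, then run the build script under bash (which
# also sidesteps any #!/bin/bash shebang pointing at a missing path).
tce-load -wi bash.tcz
bash ./build.sh
```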

3. OK, now why does the build.sh still not work?

I have fig aliased and it runs from the command line (fig up works…), bash is now available, so why am I still being told it can't recognize 'fig'? It turns out that the alias won't work within the script itself, and I needed to hack the script to source my .bash_aliases. Bleah.
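The underlying reason, for anyone else who hits this: non-interactive bash doesn't expand aliases by default, and it never reads .bash_aliases on its own. The hack amounted to something like this at the top of the script (the path is an assumption; point it at wherever the fig alias actually lives):

```sh
#!/usr/bin/env bash
# Scripts skip alias expansion unless told otherwise, and they don't
# source .bash_aliases at all, so opt in to both explicitly.
shopt -s expand_aliases
source "$HOME/.bash_aliases"

fig up -d    # 'fig' now resolves through the alias
```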

4. It works on my box!

The container builds, it starts up, and my simple curl test works well, after replacing localhost with the result of `boot2docker ip` to get my VM's IP address. Docker and fig exist to make things work very portably, so I check in and… all's not well. More for part tres.
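Note: for anyone reproducing the curl check, the localhost swap looks roughly like this (the port is an assumption, standing in for whatever your fig.yml publishes):

```sh
# Published container ports live on the boot2docker VM, not on the OSX
# host, so curl must target the VM's address. Older boot2docker builds
# print a status line on stderr, hence the 2>/dev/null.
curl "http://$(boot2docker ip 2>/dev/null):8080/"
```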