Just realized a key theme in how I browse the internet, with my ten Chrome tabs open.  Twitter is like Pinterest is like Facebook: in each case I’m getting a stream of ideas or images or information related to my interests or to people I’m interested in.  No one theme or person fills that pipeline overwhelmingly, which is how I enjoy it.

Instagram is like LinkedIn is like reading a particular RSS feed: they have their uses, but they’re not among my preferred places to spend a lot of time.  Too much comes from the same authors or points of view.  I don’t get that ah-hah! moment that comes from a choice of many rabbit trails.

In both worlds (creative chaos vs. stuck on point), they’re merely an entry point to see if I’d like to figure out more, either through the links on that site or through my own hunts off to the side.

I’ve been on a Douglas Adams kick lately.  His birthday celebration recently caused me to look up a bit of his quotable stuff…  a few below.

A learning experience is one of those things that says, ‘You know that thing you just did? Don’t do that.’

We are stuck with technology when what we really want is just stuff that works.

You’re paid a lot and you’re not happy, so the first thing you do is buy stuff that you don’t want or need—for which you need more money.

These are all from a book called ‘The Salmon of Doubt’, which was published posthumously.  To discover a new Douglas Adams book with such quotable items in it – pure delight.

Digging around, I further found that Douglas Adams was once a writer for Doctor Who, and that apparently the third book of ‘The Hitchhiker’s Guide to the Galaxy’ series (‘Life, the Universe and Everything’) was originally intended to be a Doctor Who story.  MORE delight.

I now quote the prologue to ‘The Salmon of Doubt’, from the words of Douglas Adams describing himself: “I wanted to be a writer-performer like the Pythons.  In fact I wanted to be John Cleese and it took me some time to realise that the job was in fact taken.”   I wish I had known this gentleman.

The Go programming language has the distinct advantage of compiling to a binary, which means I can compile for a particular target OS, bring my compiled binary onto my machine, and it “just works”.   Except when it doesn’t.  In this case, it didn’t as we made HTTPS calls.  The handshake between the two servers was failing, with the opaque statement of ‘remote error: handshake failure’.  To the Go team’s credit, apparently all they get back from the server is that the handshake failed.  No further detail, nothing to help track down root cause.  I’ve spent two days in curl and openssl, trying to figure out why a GET that worked in curl failed from both our Go code and an openssl invocation.

Takeaways:

* openssl gives much more information about the SSL handshake than curl does, even with -v

* I have a greater appreciation of Java’s ability to work in enterprise (i.e. legacy) systems.  Never really had to worry about this in Java.

* Go finally did the job, when given enough information about TLS settings.  We got to stay away from ciphers, thankfully, though we did get a brief exposure to the number of ciphers in existence as we investigated that potential path.

I’ll end up posting a snippet somewhere in our corporate environment about how to make a 2-way SSL handshake actually work there, given what we discovered.  Turns out, Go makes some assumptions that just don’t hold true in certain environments.  Thankfully, I cracked the case before ending up in Wireshark land.  openssl s_client did the job…  Corporate doesn’t really like sniffers being deployed on the network….

In part two, I said that the Dockerfile and fig nicely worked on my box in boot2docker.  Given that Docker exists to move things from environment to environment, I anticipated that the system would now work in our Jenkins-on-Linux continuous integration environment.  An error on my part…

It turns out that I’d downloaded a version of boot2docker which supplied Docker 1.5.  That’s a very recent release of Docker, just out in February.  The Jenkins server is running Docker 1.3, as the CentOS repos it uses don’t yet have newer versions.  Docker 1.3.2 was released last November, with 1.3.1 in October.  I’m not sure, sitting here at home, which minor version the Jenkins server is running, but what I am sure of is that the behavior of Dockerfile’s WORKDIR changed between the two.

Docker does a good job of making it easy to jump between the documentation supporting different versions.  In 1.3, the documentation for WORKDIR says it sets the working directory for any RUN, CMD, and ENTRYPOINT instructions that follow it in the Dockerfile.  In 1.5, WORKDIR also applies to COPY and ADD.  My Dockerfile used ADD to bring specific files into the container, so that variance in behavior meant that my ENTRYPOINT couldn’t actually find the files I’d moved in.  It took me a while to figure out –  inspecting the docker logs had told me that the ENTRYPOINT wasn’t working, but not a whole lot more.  To debug it, I had to take out my ENTRYPOINT so that I could get the container to start up, and then go in with a `docker run -it [IMAGE] bash` to actually go look and see where things were.
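To make the variance concrete, here’s a hypothetical minimal Dockerfile (app.jar stands in for my real files) showing both the trap and the portable workaround:

```dockerfile
FROM debian:wheezy
WORKDIR /app

# Docker 1.5 resolves this destination relative to /app; Docker 1.3
# ignores WORKDIR for ADD, so the file lands at the image root instead.
ADD app.jar app.jar

# Portable across both versions: give ADD an absolute destination.
# ADD app.jar /app/app.jar

# An ENTRYPOINT expecting /app/app.jar fails under 1.3 with the
# relative ADD above.
ENTRYPOINT ["java", "-jar", "/app/app.jar"]
```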

Now, once I knew I had a WORKDIR behavior variance, I went looking at the release notes for Docker, from v1.3 through 1.5.  WORKDIR was mentioned, but only in the context of environment variable expansion.  I also went looking on GitHub for issues with information, and didn’t find anything that appeared relevant.  (There are 19 open and 68 closed issues related to WORKDIR, as of this evening.)

I mostly wrote this up to capture my debugging path, and perhaps make this easier for someone else to find.  It also makes me consider digging into what support Docker provides for indicating the version of Docker that an image was built against.  We have `docker history` to see the list of commands used to build the layers.  We have `docker info` and `docker version` to pull the information about our own Docker environment.  Somehow annotating which Docker version a Dockerfile has been tested against, and/or an image has been built with, might be very useful in determining the viability of the Dockerfile in another environment.  Beyond commands changing behavior, there is the very real likelihood that new Dockerfile commands will be added with new Docker releases.  Having some kind of marker for ‘these match perfectly’ versus ‘these might not’ would be a good place to start.

Note: I did check registry.hub.docker.com to see how they handle support for differing versions of Docker.  The official distributions of mongo, CentOS, and redis tag their releases and provide links to their Dockerfiles, but just generally state:

‘This image is officially supported on Docker version 1.5.0.  Support for older versions (down to 1.0) is provided on a best-effort basis.’

I did use the mongo official release as part of my efforts: it seemed to do fine across 1.3 versus 1.5.  A quick peek into its Dockerfile shows me that I wouldn’t have tripped the WORKDIR issue, at least unless its debian:wheezy base image caused an issue.

Note: further poking shows that someone added an issue for Dockerfile version information about a year ago (March 29, 2014) in issue #4907.  Happily for my cause, it was tagged by a software engineer at Docker as a proposed feature just two weeks ago.

So, my last post ended with the statement that ‘having a non-breaking build is a wonderful thing’.  Too bad it wasn’t quite that easy…

After verifying that my Dockerfile ran in my boot2docker environment, I merrily checked in my changes to the base build, pressed ‘Build’ on Jenkins, and waited for the build success message to appear in our team chat room.  No luck – build failed.

OK, it turns out that our build is actually controlled by a bash script which calls fig and docker.  No worries – I’ll just test it out in boot2docker, as I really don’t like build-break messages coming over the wire.  Here’s where the problems started…

1. First step: do the fig commands in our build.sh work in boot2docker?

boot2docker for OSX comes prebuilt with fig.  boot2docker for Windows doesn’t, at least as of issue 603, which is still open for boot2docker 1.5.  fig is now Docker Compose, and I found a cheat to add an alias for fig which, under the covers, makes use of a Docker container for fig.

2. Second step: run the build.sh itself…

It turns out that ./build.sh doesn’t work in boot2docker.  That’s because boot2docker’s underlying Linux distribution, Tiny Core, doesn’t have bash installed.  Turns out to be solvable using tce-load, à la ‘tce-load -wi bash.tcz’

3. OK, now why does the build.sh still not work?

I have fig aliased and it’ll run from the command-line (fig up works…), bash now is available, so why am I still getting told it can’t recognize ‘fig’?  Turns out that the alias won’t work within the script itself and I needed to hack the script to source my .bash_aliases.  Bleah.

4. It works on my box!

The container builds, it starts up, and my simple curl test works well, after replacing localhost with the result of `boot2docker ip` to get my VM’s IP address.  Docker and fig exist to make things work very portably, so I check in and…  all’s not well.  More for part tres.

I bounce back and forth between a Windows laptop and a CentOS workstation for my current project.  Now I’m working with Docker, which exists to containerize applications: to let them be bundled with, or at least specifically describe, the environment they need. Docker then makes that environment work, regardless of whether it’s on my development box, a test box, or a production box.  No more “but it needs version ‘foo’ to work properly and our system only has version ‘bar’ installed….”  However, in my CentOS environment Docker can work directly, whereas in my Windows environment it needs an intermediary to get to Docker goodness.  That intermediary is boot2docker, which exists for both Windows and OSX.

Once I’ve got boot2docker installed and am in, it’s now time to do a docker build.  In my case, I’m building against a local Dockerfile which exists on my laptop.  However, that file and its supporting resources all live outside of the context of the VM boot2docker spins up within which to run docker.  How could I mount a drive to share the files in?

The approach listed in boot2docker’s README.md to mount a drive didn’t work for me.  Or, more properly, I was able to set up a mounted drive; I could look in it from my development box, but not write to it. Unsure why.  However, apparently relatively recently boot2docker added the ability to go to /c/Users – see the first comment on this post, which described a _different_ way to map drives.  (I looked at 3.)  The boot2docker default mapping doesn’t look at the rest of c: – just /c/Users.  Thankfully, my stuff already lived in a nested directory below that.

Mission for the night accomplished: Dockerfile refactored to run with a different base image, one that assumes everything’s pre-built rather than doing magic things to make a Maven build happen in the Docker container.  That makes the world much nicer for things like new unit tests that depend on having a Mongo database around…  Too bad all of that wasn’t in my sprint task list.  But having a non-breaking build is a wonderful thing.

A local women-in-technology group sent out a message that another group is looking for speakers for an Internet of Things event.  They said the group was very interested in making sure that women were visible at the front of the room.  I count that an admirable thing, given the relative lack of women in the computer science field: having visible folks from any ‘identifiable’ group makes it easier for members of that group to project themselves into that role, which makes it more likely that the field will grow in diversity and perspectives.  Note that an ‘identifiable’ group could mean gender, race, ethnicity, economic stratum, etc. – if you can identify with it from seeing or hearing someone, it’s an identifiable group.  (Under my definition, that doesn’t require that the speaker identify that way, just that other folks categorize the speaker as like themselves.  But that’s a side note.)

The challenge for me: after getting information from the organizer, I wasn’t sure my background and expertise quite fit what they were looking for.  Now, there’s an oft-raised concern that women back away because we feel like we’re not quite good enough for the challenge, that we’re ‘impostors’ who don’t give enough credit to our own accomplishments.  In my case, I had done a single IoT project, which had never gone into production use.  Does it count as a ‘Thing’ (my capital T) or a prototype thing?   The group itself is focused on Microsoft .NET technologies, which doesn’t mean they don’t do other stuff, but it does mean that coming in with a concept of how my IoT [T][t]hing compared or contrasted with the Microsoft approach would make me a more informed speaker.  Frankly, I didn’t want to do that much delving.  I’d have been happy to reshape my particular talk and update it a bit, but I didn’t want to be the person at the end of the panel feeling like they were the weakest link.

So, I ultimately passed.  I’d rather dedicate geek prep time to building IoT effort #2, or working on an open-source project, or exploring the ins and outs of containerization technologies or JavaScript frameworks or PaaS or NoSQL database interaction patterns.  But I sure hope another female IoTer steps up and is visible…  going to kick myself later if the panel still ends up being only men.