I made a small amount of progress on yesterday’s yaks. I’ve almost got to the point where I think the tests work on both OS X and Linux; I just need to get some time on a Mac to be sure.

The experience so far has made me realise how powerful it is to have uniform build and test infrastructure. At Google, an improvement as small as the one I’m making would take little longer than writing the code itself. Everything is in one repository, and all tests are run with the same tool. As a result, you simply make your change, run the tests, and send it for review.

The project I’m changing has a test suite. For local testing, you run either py.test, which tests against the current Python environment, or tox, which tests against a collection of environments. In the CI build, the tests are run with py.test under a set of environments defined by Travis CI’s infrastructure.
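For concreteness, here’s a minimal sketch of the local side of that split; the environment list and dependencies are assumptions for illustration, not this project’s actual configuration:

```ini
# tox.ini — hypothetical configuration for illustration.
[tox]
# tox creates a virtualenv per entry and runs the tests in each,
# whereas a bare `py.test` only ever sees your current environment.
envlist = py27, py34

[testenv]
deps = pytest
commands = py.test
```

The rub is that CI ignores this list entirely: Travis builds its own environment matrix from .travis.yml, so what tox exercises locally and what CI exercises need not line up.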

This difference between local tests and tests under CI is quite frustrating, especially given that CI runs take many minutes to be scheduled and run. I can get to the Works On My Machine™ state, yet still have the CI build fail because its environment is different. To say I spent most of yesterday and today refreshing CI build pages wouldn’t be too inaccurate!

On top of that, Travis CI recently changed its default from virtualized infrastructure to containerized infrastructure. The differences seem to be:

  • containerized infrastructure gets scheduled and runs more quickly
  • containerized infrastructure won’t run setuid programs such as sudo, which this project’s environment setup uses (see the sketch after this list)
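In .travis.yml terms, the whole distinction hinges on one key. A minimal sketch, assuming a Python version matrix and install step that are made up for illustration:

```yaml
# .travis.yml — hypothetical; only the `sudo` key is the point here.
language: python
python:
  - "2.7"
  - "3.4"
install: pip install pytest
script: py.test
# `sudo: false` opts in to the containerized infrastructure (quicker
# to schedule, but setuid programs such as sudo won't run there);
# `sudo: required` forces the older virtualized machines instead.
sudo: required
```

For a project whose setup genuinely needs sudo, flipping that one key to `required` is the escape hatch, at the cost of the slower scheduling described above.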

However, Travis CI only applied this new default to repositories created after some cutoff date.¹ This meant that the exact same commit that passed in the upstream CI build would fail in a new clone.² Certainly not intuitive!

I’m really interested to see how different projects approach this problem. Having such uniform infrastructure is a luxury that not many can afford!



  1. The attentive reader may wonder about the sudo_detected method call: shouldn’t that have prevented this from being a problem? Sadly, it appears this detection only looks at invocations directly specified in .travis.yml, and makes no attempt to find indirect invocations.
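     Roughly, the detection appears to catch only the first style below, not the second. Both snippets are made up to illustrate the distinction; the package and script names are hypothetical.

```yaml
# Detected: the sudo invocation is written directly in .travis.yml.
install:
  - sudo apt-get install -y libfoo-dev   # hypothetical package
```

```yaml
# Missed: sudo is buried inside a script that .travis.yml merely
# invokes (setup-environment.sh is a made-up name).
install:
  - ./scripts/setup-environment.sh
```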

  2. I’m not even going to go into how, at Google, all projects would have been tested both with and without the new configuration. It would then be fairly easy to send an automated change updating the configuration of any project that started failing under the new CI configuration.