“The Good Ole’ Days”
Back in aught eight when I was a kid, the way we deployed complex services was a 1,000-line shell script that was neither idempotent nor checked into SCM. It just sat there at an HTTP endpoint, ready for sudo | bash-ing (I guess sudo normally wasn't an issue as we ran as root :P). If it needed a tweak, you could just SSH to the server, fire up pico, make the change, deploy your stuff, then sit back and relax while wondering why the rest of the team was complaining about deploys not working. After all, it Works on My Machine :)
While I look back with fondness at the days of yore, I can only imagine it is the fresh Colorado air that is making me forget how crappy it is to have one deploy work and then have literally the same thing fail 5 minutes later because someone was mucking with the script. So we're not going to do that.
Instead, we are going to use something called Docker Compose. Docker Compose is a project by Docker that was based, a long time ago, on something called Fig. Unlike the rest of the Docker toolkit, docker-compose is a Python application that uses YAML to describe a service or set of services. It allows you to define pretty much every aspect of how the services are run, what the networking and storage systems will look like, and how to fine-tune your app's behavior via environment variables.
There is a ton of info out there on Docker Compose, so please do take a peek. For now, let's roll forward into the unknown and create our first compose file.
deploy/master/docker-compose.yml
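The compose file itself isn't reproduced in this excerpt, so here is a minimal sketch of what it might look like based on the sections described below; the image names, volume names, and container paths are illustrative assumptions, not the exact values from the repo:

```yaml
# deploy/master/docker-compose.yml (sketch -- image names and paths are assumptions)
version: '2'

services:
  # The Jenkins master itself, exposing the UI on port 8080.
  master:
    image: modern-jenkins/jenkins-master:latest
    ports:
      - "8080:8080"
    volumes:
      - plugins:/usr/share/jenkins/ref/plugins
      - warfile:/usr/share/jenkins/war

  # A data-only container whose baked-in plugins and warfile seed the
  # named volumes on first run.
  data:
    image: modern-jenkins/jenkins-data:latest
    volumes:
      - plugins:/usr/share/jenkins/ref/plugins
      - warfile:/usr/share/jenkins/war

volumes:
  plugins:
  warfile:
```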
A compose file is made up of a few sections, as in the example above. Here are the ones we're using:
- version: Defines which version of the compose file format this is.
- services: This is where we list out all of the services that we need running. This example is fairly straightforward, but it is possible to include any service your app needs in this section. You're basically describing the full system and its interactions.
- volumes: This is where data storage is described. We're using it to define two volumes, one for the plugins and one for the warfile. When a volume is created, data from the container is copied into it. Since the first container does not have anything at that path, the data from the second container is what we get, which is exactly what we want.
- networks: Not used here, but this is where all container networking is defined.
This is a fairly simple compose file, so it should be easy to understand. You may also notice that it's succinct and to the point while still being super readable. This is why I like Docker Compose: we can describe something extremely complex (not so much in this case) as an easy-to-read YAML file.
Test it out
Ok, here we go, girls and boys. The big reveal. Our rocket-powered motorcycle is fueled up and we're ready to jump the Snake River!
PWD: ~/code/modern-jenkins
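The exact commands aren't shown in this excerpt; assuming the compose file lives at deploy/master/docker-compose.yml and the master service is named master (as in the sketch above), starting it up looks roughly like this:

```bash
# Bring the stack up in the background
cd deploy/master
docker-compose up -d

# Tail the master's logs until "Jenkins is fully up and running" appears
docker-compose logs -f master

# Tear it all down when you're finished
docker-compose down
```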
The Jenkins app should be starting up now, and once the logs say "Jenkins is fully up and running" you should be able to browse to the UI at http://localhost:8080 and bask in its Janky glory.
Now that we know how to start / stop it, we should add this to the documentation. It is important to keep these docs up to date so that anyone can jump in and start using it without having to do a massive amount of research. Let’s add this to the README:
deploy/README.md
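The actual README addition isn't reproduced here, but a sketch of the kind of start/stop instructions it might contain (commands assume the layout used above):

```markdown
## Running the Jenkins master locally

Start the stack from the repo root:

    cd deploy/master
    docker-compose up -d

Tail the logs until you see "Jenkins is fully up and running", then
browse to http://localhost:8080.

Stop and remove the containers with:

    cd deploy/master
    docker-compose down
```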
WTF Matt, a 6 part blog series to replace “java -jar jenkins.war” and 6 clicks?
hahaha, well you got me there! JK. While java -jar jenkins.war and a few mouse clicks could get us to the same place, it would not have been wasting nearly enough bits :trollface:
Obviously there are two potential reasons why we did this:
- I am a crazy person
- I am a just-crazy-enough person
Luckily for you, the answer is the latter. If you’ll recall, the whole reason I’m writing this is because I’m tired of people showing me their ugly Jenkins and encouraging me to tell them how cute it is.
The problem with most of these monstrosities is not that they don't build the software; if they didn't do that, they wouldn't exist at all. The problem is that they are extremely hard, time-consuming, dangerous, and unfun to work on.
That's fine for something that never changes, but as it turns out we're making things for the internet, which is growing and changing constantly, meaning that we constantly need to change and adapt in order to move forward. This applies very much to our build system. It is a system that eventually everyone in the company comes to rely on: from C-levels who need profits, to PMs and POs who need features, to engineers who need to do the actual work.
When a CI system turns into a bowl of spaghetti, each little necessary change becomes a nerve-racking, after-hours, signed-out-of-Slack maintenance window that gives us all PTSD after the 30th time it goes south. What we are doing here is implementing a semi-rigid structure for our system so that we can manage change effectively while still moving fast.
Let’s walk through some of the common Crappy Times at Jenkins High:
- A plugin with a broken dependency: Instead of finding out after checking "restart Jenkins when done" that a plugin can't fully resolve its dependencies, we will see it when we try to build the Docker image. It is still non-optimal that it happens, but it is not a prod outage and builds keep running, preventing a Cincinnati Time Waste tournament.
- Rolling credentials for the CI Git user: In the old days, this required a ton of coordination in addition to an outage. We have not shown it yet, but when your secrets are tied to the container, we are able to modify all the required credentials, roll the system, and get back at it.
- A job that broke for "no reason": It's always unpleasant to be surprised by a job that is just no longer going green. When we version all of our jobs, plugins, and master configuration, bisecting what caused a failure (or behavior change) becomes much simpler. We just go back to the point at which the job was running and diff that environment against what is running today. Since we're able to run everything locally, it should be a fairly straightforward process to replicate the problem on your laptop and lower your MTTR.
All of the problems we are talking about are still going to occur, but what we're doing is pushing them from runtime down to build time. We want to find these issues in advance, where they are not causing outages. We want to be able to treat our pipelines, configuration, and infrastructure as code to avoid the bowl of spaghetti that is fragile and unknown in nature. The team that helps with the build system should not be called "10ft Pole" (my old team); it should be called "Golden Retriever Puppies," because everyone wants to play with us.
In conclusion
I hope you are able to see how the beginnings of our system lend themselves to a fully scalable solution: one that can handle hundreds of builds, thousands of developers, and at least 10s of different companies you're going to work at :)
If you don't see it quite yet, then you're going to have to trust me that we are indeed doing this work for something and not for nothing. Anyways, no skin off my nose if you don't. Just keep typing, code monkey.
In the next unit of this series we will begin configuring Jenkins. This will allow you to begin making Jenkins do the stuff you need it to do. Stay tuned for Unit 3 of Modern Jenkins: Programmatically Configuring Jenkins for Immutability with Groovy.
The repo from this section can be found under the unit2-part5 tag here: https://github.com/technolo-g/modern-jenkins/tree/unit2-part5