Modern Jenkins Unit 2 / Part 5: Starting Jenkins with Docker Compose

“The Good Ole’ Days”

Trollface

Back in aught eight when I was a kid, the way we deployed complex services was a 1,000-line shell script that was neither idempotent nor checked into SCM. It just sat there at an HTTP endpoint, ready for curl | sudo bash-ing (I guess sudo wasn’t normally an issue as we ran as root :P). If it needed a tweak, you could just ssh to the server, fire up pico, make the change, deploy your stuff, then sit back and relax while wondering why the rest of the team was complaining about deploys not working. After all, it Works on My Machine :)

While I look back with fondness on the days of yore, I can only imagine it is the fresh Colorado air that is making me forget how crappy it is to have one deploy work and then have literally the same thing fail 5 minutes later because someone was mucking with the script. So we’re not going to do that.

Docker Compose

Instead, we are going to use something called Docker Compose. Docker Compose is a project by Docker that grew out of a tool called Fig a long time ago. Unlike the rest of their toolkit, docker-compose is a Python application that uses YAML to describe a service or set of services. It allows you to define pretty much every aspect of how the services are run, what the networking and storage systems will look like, and how to fine-tune your app’s behavior via environment variables.
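
If you are on Docker for Mac or Docker for Windows, Compose ships alongside the Docker engine; on Linux it is a separate install. A quick sanity check before we start (the exact version reported will of course differ on your machine):

# Verify docker-compose is installed and on your PATH
docker-compose --version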

There is a ton of info out there on Docker Compose, so please do take a peek. For now, let’s roll forward into the unknown and create our first compose file.


deploy/master/docker-compose.yml

---
# deploy/master/docker-compose.yml
# Define the version of the compose file we're using
version: '3.3'

# Define our services
services:
  # Jenkins master's configuration
  master:
    image: modernjenkins/jenkins-master
    ports:
      - "8080:8080"
    volumes:
      - plugins:/usr/share/jenkins/ref/plugins
      - warfile:/usr/share/jenkins/ref/warfile

  # Jenkins plugins' configuration
  plugins:
    image: modernjenkins/jenkins-plugins
    volumes:
      - plugins:/usr/share/jenkins/ref/plugins
      - warfile:/usr/share/jenkins/ref/warfile

# Define named volumes. These are what we use to share the data from one
# container to another, thereby making our jenkins.war and plugins available
volumes:
  plugins:
  warfile:

A compose file is made up of a few sections, as in the example above. Here are the ones we’re using:

  • version: Defines which version of the compose file format this is

  • services: This is where we list out all of the services that we need running. This example is fairly straightforward, but it is possible to include any service your app needs in this section. You’re basically describing the full system and its interactions.

  • volumes: This is where data storage is described. We’re using it to define two named volumes, one for plugins and one for the warfile. When a named volume is first created, the data that already exists at the mount path inside the container is copied into it. Since the master container has nothing at that path, the data from the plugins container is what we get, which is exactly what we want (see the quick check after this list).

  • networks: Not used here, but this is where all container networking would be defined.
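
Once you’ve run docker-compose up (next section), you can verify that volume behavior directly. Note that Compose prefixes volume names with the project name, which defaults to the directory name (master here), so the exact names on your machine may vary:

# List the named volumes Compose created
docker volume ls | grep master

# Inspect one to see where the plugin data actually lives on the host
docker volume inspect master_plugins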

This is a simple example of a compose file, so it should be fairly straightforward to understand. You may also notice that it’s succinct and to the point while still being super readable. This is why I like Docker Compose: we can describe something extremely complex (not so much in this case) as an easy-to-read YAML file.

Test it out

OK, here we go, girls and boys. The big reveal. Our rocket-powered motorcycle is fueled up and we’re ready to jump the Snake River!


PWD: ~/code/modern-jenkins

# Compose up
cd deploy/master
docker-compose up -d
docker-compose logs -f

The Jenkins app should be starting up now, and once the logs say “Jenkins is fully up and running” you should be able to browse to the UI at http://localhost:8080 and bask in its Janky glory.
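
If you’d rather poke it from the command line than a browser, something like the following works (a hypothetical check; any HTTP status code means the servlet container is listening, and you may see a 403 once security gets configured later in the series):

# Print just the HTTP status code from the Jenkins UI
curl -sS -o /dev/null -w "%{http_code}\n" http://localhost:8080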

Now that we know how to start / stop it, we should add this to the documentation. It is important to keep these docs up to date so that anyone can jump in and start using it without having to do a massive amount of research. Let’s add this to the README:


deploy/README.md

# Deployment Scripts

Jenkins is deployed via Docker Compose. In order to run the service locally, use
the following commands:

```
# Get into the deploy directory
cd deploy/master

# Start the service as a daemon
docker-compose up -d

# View logs
docker-compose logs -f

# Stop Jenkins
docker-compose down -v

# Pull new images
docker-compose pull
```

WTF Matt, a 6 part blog series to replace “java -jar jenkins.war” and 6 clicks?

hahaha, well you got me there! JK. While java -jar jenkins.war and a few mouse clicks could get us to the same place, it would not have been wasting nearly enough bits :trollface:

Crazy like Fox News

Obviously there are two potential reasons why we did this:

  1. I am a crazy person
  2. I am a just-crazy-enough person

Luckily for you, the answer is the latter. If you’ll recall, the whole reason I’m writing this is because I’m tired of people showing me their ugly Jenkins and encouraging me to tell them how cute it is.

The problem with most of these monstrosities is not that they don’t build the software; if they didn’t do that, they wouldn’t exist at all. The problem is that they are extremely hard, time consuming, dangerous, and unfun to work on.

Spaghetti Code

That’s fine for something that never changes, but as it turns out we’re making things for the internet, which is growing and changing constantly, meaning that we constantly need to change and adapt in order to move forward. This very much applies to our build system. It is a system that eventually everyone in the company comes to rely on, from C-levels who need profits, to PMs and POs who need features, to engineers who need to do the actual work.

When a CI system turns into a bowl of spaghetti, each little necessary change becomes a nerve-racking, after-hours, signed-out-of-Slack maintenance window that gives us all PTSD after the 30th time it goes south. What we are doing here is implementing a semi-rigid structure for our system so that we can manage change effectively while still moving fast.

Jenkins Oops

Let’s walk through some of the common Crappy Times at Jenkins High:

Cincinnati Time Waste

  • A plugin with a broken dependency: Instead of finding out only after ticking ‘restart Jenkins when done’ that a plugin can’t fully resolve its dependencies, we will see it when we try to build the Docker image. It is still non-optimal that it’s happening, but it is not a prod outage and builds are still running, preventing a Cincinnati Time Waste tournament.

  • Rolling credentials for the CI Git user: In the old days, this required a ton of coordination in addition to an outage. We have not shown it yet, but when your secrets are tied to the container we are able to modify all the required credentials, roll the system, and get back at it.

  • A job that broke for “no reason”: It’s always unpleasant to be surprised by a job that is just no longer going green. When we version all of our jobs, plugins, and master configuration, bisecting what caused a failure (or behavior change) becomes much simpler. We just go back to the point at which the job was running and diff the environment against what is running today (see the example after this list). Since we’re able to run everything locally, it should be a fairly straightforward process to replicate the problem on your laptop and lower your MTTR.
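
As a concrete (and hypothetical) illustration of that last point: because everything that defines the environment lives in Git, finding what changed between a known-good build and today is just a diff. The tag name below is made up; use whatever ref was in place when the job was last green:

# Diff the plugin set and deploy config between a known-good ref and today
git diff known-good-tag..HEAD -- images/jenkins-plugins/files/plugins.txt deploy/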

All of these problems are still going to occur, but what we’re doing is pushing them from runtime back to build time. We want to find these issues in advance, where they are not causing outages. We want to be able to treat our pipelines, configuration, and infrastructure as code to avoid the bowl of spaghetti that is fragile and unknown in nature. The team that helps with the build system should not be called “10ft Pole” (my old team); it should be called “Golden Retriever Puppies” because everyone wants to play with us.

Golden Retriever Puppies

In conclusion

In conclusion, I hope you are able to see how the beginnings of our system lend themselves to a solution that can scale to hundreds of builds, thousands of developers, and at least 10s of different companies you’re going to work at :)

If you don’t see it quite yet, then you’re going to have to trust me that we are indeed doing this work for something and not for nothing. Anyway, no skin off my nose if you don’t. Just keep typing, code monkey.

In the next unit of this series we will begin configuring Jenkins. This will allow you to begin making Jenkins do the stuff you need it to do. Stay tuned for Unit 3 of Modern Jenkins: Programmatically Configuring Jenkins for Immutability with Groovy.

The repo from this section can be found under the unit2-part5 tag here: https://github.com/technolo-g/modern-jenkins/tree/unit2-part5

Next Post: The Jenkins Groovy init system (init.groovy.d)

Modern Jenkins Unit 2 / Part 4: The Jenkins Plugin Image

The plugins image

Jenkins Plugins

You may have noticed that while we called the previous image jenkins-master, we never did drop the war in it. In fact, the only reference to that war we’ve seen is the very base image which sets a version, path, and a checksum. What’s the reason for this madness?

The answer is that the images we have built up until now are only a runtime environment for this image. The master image (the one we just built) will almost never change: when doing an upgrade, the new war rarely has new system requirements and rarely changes the directory structure or anything like that.

Jenkins Plugins Contents

What does change from deployment to deployment is the set of plugins, version of the Jenkins war, and the configuration that interacts with those things. For this reason I choose to run a vanilla Jenkins master container (with a few environment variable configs passed in) and a highly customized plugin container. This plugin container is where the binaries live and is volume mounted by the master to provide the software itself.

Let’s create it now and we can talk more about it after.


images/jenkins-plugins/Dockerfile

# images/jenkins-plugins/Dockerfile
FROM modernjenkins/jenkins-base
MAINTAINER matt@notevenremotelydorky

LABEL dockerfile_location=https://github.com/technolo-g/modern-jenkins/tree/master/images/jenkins-plugins/Dockerfile \
      image_name=modernjenkins/jenkins-plugins \
      base_image=modernjenkins/jenkins-base

# Add our plugin installation tool. Can be found here and is modified from the
# upstream version.
# https://raw.githubusercontent.com/technolo-g/modern-jenkins/master/images/jenkins-plugins/files/install-plugins.sh
ADD files/install-plugins.sh /usr/local/bin/

# Download the Jenkins war
# JENKINS_URL, JENKINS_ROOT, JENKINS_WAR, and JENKINS_SHA are set in the parent
RUN mkdir -p ${JENKINS_ROOT}/ref/warfile \
  && curl -fsSL ${JENKINS_URL} -o ${JENKINS_WAR} \
  && echo "${JENKINS_SHA}  ${JENKINS_WAR}" | sha256sum -c - \
  && chown -R ${user}:${user} ${JENKINS_ROOT}

# We will run all of this as the jenkins user as is dictated by the base image
USER ${user}

# Install our base set of plugins and their dependencies that are listed in
# plugins.txt
ADD files/plugins.txt /tmp/plugins-main.txt
RUN install-plugins.sh `cat /tmp/plugins-main.txt`

# Export our war and plugin set as volumes
VOLUME /usr/share/jenkins/ref/plugins
VOLUME /usr/share/jenkins/ref/warfile

# It's easy to get confused when just a volume is being used, so let's just keep
# the container alive for clarity. This entrypoint will keep the container
# running for... infinity!
ENTRYPOINT ["sleep", "infinity"]

You can see from the Dockerfile that this image is where the action is. We have a similar set of metadata at the top like the other images, then we add a file named install-plugins.sh. This file comes from the upstream Jenkins Docker image and its purpose is to install a set of plugins as well as any dependencies they have. It can be downloaded from the link provided in the Dockerfile.

Then we go on to download the Jenkins war and check its SHA. If the SHA does not match what we have in the base image, this step will fail and you know that something is amiss. Since the version and the SHA are both set in the very base image, they should always match; there is never a scenario in which those two get out of sync.
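
If you ever want to run the same check by hand outside of the image build, it boils down to the following, where JENKINS_URL and JENKINS_SHA stand in for the values baked into the base image (substitute them in; they are not set in your shell):

# Download the war and verify it against the pinned SHA-256
curl -fsSL "${JENKINS_URL}" -o jenkins.war
echo "${JENKINS_SHA}  jenkins.war" | sha256sum -c -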

SHA 256 Sum

Once the war and tools are installed, we can install our set of plugins. The install-plugins.sh script needs the war in order to run, so now we should be ready. What this script does in the background is interact with the Jenkins Update Center to install each plugin listed in plugins.txt. It will reach out to download the plugin and check for any dependencies the plugin may have. If there are any, it will download those, resolve transitive deps, and so on until the full set of plugins defined by us is installed along with any deps they need to function.

NOTE: This is different than the plugins.sh file that is out there. That script will not resolve dependencies and makes it very hard to audit which plugins you actually need.


PWD: ~/code/modern-jenkins/

# Add plugin resolver
cd images/jenkins-plugins
mkdir -p files/
wget -O files/install-plugins.sh \
  https://raw.githubusercontent.com/technolo-g/modern-jenkins/master/images/jenkins-plugins/files/install-plugins.sh
chmod +x files/install-plugins.sh

# Add a very base set of plugins to plugins.txt
# Add some credential storage
echo "credentials" >> files/plugins.txt
# Enable GitHub interactions
echo "github" >> files/plugins.txt
# Make our blue balls green
echo "greenballs" >> files/plugins.txt
# Give us Groovy capabilities
echo "groovy" >> files/plugins.txt

If you recall, we discussed that this image is only going to provide the software itself while the Jenkins master image will provide the runtime. How that works is that we export our plugins and warfile via the VOLUME statements at the bottom of this Dockerfile and mount them into the master via --volumes-from. This makes our plugins image a fully contained and versionable bundle of the master war and any plugins we need. A little later on, we will talk about how to include your configuration as well.

Finally we have the ENTRYPOINT. This version is fairly simple: sleep infinity. This keeps the container running even though we do not have a real process in it. Since this is only a container for our wars and JPIs, it doesn’t need to run the JVM or anything like that; it only needs to provide its exported volumes. If we were to omit the ENTRYPOINT, everything would still work as expected, except that the jenkins-plugins container would not be running.

It would appear to be in a stopped state, which for me is very confusing. The container is being used by the master (by way of volumes), so it is indeed in use. The fact that Docker shows it as stopped is misleading IMO, so this just props up the container for clarity.
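
You can see the effect once the containers from the test section further down are running (the name filter assumes the container was started with --name plugins as shown there):

# With the sleep ENTRYPOINT, the plugins container shows as Up instead of Exited
docker container ls --filter name=plugins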

Building the image

Well, we’ve got another image to build, and I think by this time you know what we’re going to do, and it’s not DRYing out our builders :P


PWD: ~/code/modern-jenkins/

# Warm up the copy machine...
cd images/jenkins-plugins
cp -rp ../jenkins-master/build.sh .
perl -pi -e 's~jenkins-master~jenkins-plugins~g' build.sh

# Build the image
./build.sh
# yay! It worked on the first try :trollface:

Testing the image

As us rafters say, the proof is at the put-in. Let’s give it a whirl!


PWD: ~/code/modern-jenkins

# Start the plugins container first
docker container run --name plugins -d modernjenkins/jenkins-plugins

# Now fire up the master
docker container run --rm -ti --volumes-from=plugins -p 8080:8080 \
  modernjenkins/jenkins-master
  
# Open the GUI
open http://localhost:8080

Jenkins Home

Would you look at that? Jenkins seems to be starting up swimmingly! If it is not for you, try to debug what exactly is going wrong. There are a lot of moving parts and this is a fairly complex system so don’t feel bad. It happens to all of us. Except us writing blog tutorials. We are always 100% right and our instructions work 11/10 times so you’re wrong and you should feel bad :P Seriously though, if something is jacked up in these instructions please use your super PR skills and help a brotha out by submitting a PR to the repo.


Unicorn Cleanup

Unicorn from: http://sperlingsmaedchen.deviantart.com/art/unicorns-fart-rainbows-381339815


Cleaning up

After running tests like these, we definitely need to begin thinking about cleanup. What would happen if we tried to run the same tests again right now? Feel free to try it, but the secret is that it won’t work. We need to delete the remnants from the previous test before starting another, so I make it a habit to ensure a clean environment before I run a test and to clean up afterwards. The command I normally use for this is the equivalent of “nuke it ‘till it glows”: docker container rm -fv $(docker ps -qa). This little gem will remove all containers, running or not, as well as any volumes they may have created (you may want to read more about that; volumes not in the state you thought they were in can ruin your day in lots of ways).

One other thing you may notice is that no matter how diligent you are, you’re developing a stack of <none> images, weirdly named volumes, and orphaned networks. This is normal cruft left behind while doing Docker development, and it can be removed by using docker system prune. This will remove:

  • all stopped containers
  • all volumes not used by at least one container
  • all networks not used by at least one container
  • all dangling images (the <none> ones)

NOTE: If you really want to clean up, add a -a and it will also remove images not attached to a running container. I find that to be annoying except when we’re in prod, but it is handy there.


(~) ------------------------------------------------------------------------- 🐳  unset (matt.bajor)
% docker rm -fv $(docker ps -qa)
34b9692447f6
59b24f290270
(~) ------------------------------------------------------------------------- 🐳  unset (matt.bajor)
% docker system prune
WARNING! This will remove:
        - all stopped containers
        - all networks not used by at least one container
        - all dangling images
        - all build cache
Are you sure you want to continue? [y/N] y
Deleted Networks:
master_default

Deleted Images:
untagged: modernjenkins/jenkins-master@sha256:8f4b3bcad8f8aa3a26da394ce0075c631d311ece10cf7c23ce60058a9e47f6ed
deleted: sha256:96c78f549467f8b4697b73eddd9da299d8fd686696b45190a2bba24ad810529a
deleted: sha256:d1f38cb683287825bbf1856efdfaa87e2a7c279ceb793f9831b88b850ae1c9a0
deleted: sha256:5371c45cef2d3c5c468aae4fd5e93c335e8e681f2aa366f6122902c45e8ec9cb
deleted: sha256:079be452ec3e99b51a35b76e67b1bb3af649c3357e3ba05d2b6bd2a8127804b4
deleted: sha256:87baad26b39521ddd0d7b12ac46b2f92344f2f8ad34f0f35c524d5c0c566b409
deleted: sha256:c348763948964e1f63c427bea6b4d38c3a34403b61aee5e7b32059a3c095af32
deleted: sha256:6f92439bdac179e8c980dc6a7eb4f9647545e9c6d34d28edbba3c922efa9ea1e
deleted: sha256:edd5cbd4dc3cb3e9ab54bb1d7f446d5638c4543f04f2b63ae1a3e87a661be7a2
deleted: sha256:7890def677cf6649567c4355ef8f10c359f71c0ac9ca6ab94d8f359a5d57f84d
deleted: sha256:2704ec820811576ee2c60b8a660753939457f88fbe6938c2039489a6047ec59c
deleted: sha256:202acc3c794ce58a5e0b0e6b3285ab5ae27c641804c905a50b9ca7d5c601b2b3
deleted: sha256:70e19603643ce03f9cbff3a8837f1ebfb33fe13df7fba66c2501be96d9a2fb93
deleted: sha256:8e757cb858613c81e5fa8fb2426d22584539c163ce4ab66d6b77bd378ee2817a
deleted: sha256:18d1a064d790f3be371fef00813efe1c78996eab042977b952f4cbf067b846e8
deleted: sha256:bddcbf75436ff49e435fe3c371337b6b12ae125e68e0d833ac6180ffd82f34d9
deleted: sha256:f4dae60dcb2542e532eb05c94abba2da00d5a36360cb1d79cb32f87bf9b9c909
deleted: sha256:12f7c2589fdbb6e8b9ac78983511df70e9613c8da42edf23ee1cdb3599437233
deleted: sha256:26b155d41fabd6881f871945586c623a485688fc67f08223df288522f7aeed87
deleted: sha256:3a7c393698419b8f4f7a1464264459d2662d9015b3d577ad8cb12e0b4ae069a5
deleted: sha256:53794a3680b75ae98f70ab567db86a1b0e262615a8194bad534edfd5d8acc2f2
deleted: sha256:13449dedb3ec5df1f1b969aa2f1f93bb2a3bed2fb3ebe7279cce52b750696031
deleted: sha256:55aae84cda94b4611f73ec70b4cc1ea7ce4bbb77a1999b585fcc46c6239fe2a5
deleted: sha256:b41674288931c4e4bcd43e9fcc0d0af8d9ddd9a31f04506050ce0f0dfc59e3e3

Total reclaimed space: 313.9MB

Commit, push, PR

You know the drill. Integrate early, integrate often. Make sure you actually look at the work you’re merging. After all, it has your public name on it twice.

If you did get lost (I know I had to make a minor change to my base image), take a look at the unit2-part4 tag here: https://github.com/technolo-g/modern-jenkins/tree/unit2-part4

Next Post: Starting Jenkins with Docker Compose

Modern Jenkins Unit 2 / Part 3: Building the Jenkins Master Image

NOTE: Make sure you’re checking out a branch at the beginning of each section!

Building our master image

Jenkins Master

Now that we have a good base to inherit from, we can begin building out the rest of our images inheriting from that one. The next image we need is for the master. This image won’t contain too much other than generic configuration and a couple tools because we want our master image itself to be as generic as possible. The customization of each provisioned Jenkins master consists of configuration and plugins which we will package in a separate image. We will talk more about why it’s broken down this way later on. For now, let’s take a look at what we have for a Jenkins master image (modernjenkins/jenkins-master):


images/jenkins-master/Dockerfile

# images/jenkins-master/Dockerfile
FROM modernjenkins/jenkins-base
MAINTAINER matt@notevenremotelydorky

LABEL dockerfile_location=https://github.com/technolo-g/modern-jenkins/tree/master/images/jenkins-master/Dockerfile \
      image_name=modernjenkins/jenkins-master \
      base_image=modernjenkins/jenkins-base

# Jenkins' Environment
ENV COPY_REFERENCE_FILE_LOG $JENKINS_HOME/copy_reference_file.log

# `/usr/share/jenkins/ref/` contains all reference configuration we want 
# to set on a fresh new installation. Use it to bundle additional plugins 
# or config file with your custom jenkins Docker image.
RUN mkdir -p /usr/share/jenkins/ref/init.groovy.d

# Disable the upgrade banner & admin pw (we will add one later)
RUN echo 2.0 > /usr/share/jenkins/ref/jenkins.install.UpgradeWizard.state \
    && echo 2.0 > ${JENKINS_HOME}/jenkins.install.InstallUtil.lastExecVersion

# Fix up permissions
RUN chown -R ${user} "$JENKINS_HOME" /usr/share/jenkins/ref

# Install our start script and make it executable
# This script can be downloaded from
# https://raw.githubusercontent.com/technolo-g/modern-jenkins/master/images/jenkins-master/files/jenkins.sh
COPY files/jenkins.sh /usr/local/bin/jenkins.sh
RUN chown jenkins /usr/local/bin/* && chmod +x /usr/local/bin/*

# Make our jobs dir ready for a volume. This is where job histories
# are stored and we are going to use volumes to persist them
RUN mkdir -p ${JENKINS_HOME}/jobs && chown ${user}:${group} ${JENKINS_HOME}/jobs

# Install Docker (for docker-slaves plugin)
RUN yum-config-manager --add-repo \
      https://download.docker.com/linux/centos/docker-ce.repo \
    && yum makecache fast \
    && yum install -y docker-ce \
    && yum clean all -y

# Switch to the Jenkins user from now on
USER ${user}

# Configure Git
RUN git config --global user.email "jenkins@cicd.life" \
    && git config --global user.name "CI/CD Life Jenkins"

# Main web interface and JNLP slaves
EXPOSE 8080 50000
ENTRYPOINT ["/usr/local/bin/jenkins.sh"]

Looking at this Dockerfile, you may see a few new things like USER (runs the commands after this declaration as the defined user) and EXPOSE (documents the ports the container listens on so they can be published to the host), but for the most part it’s very similar to the previous one. Set a few ENV vars, RUN a few commands, etc.
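
Note that EXPOSE on its own does not publish anything to the host; you still map the ports at run time with -p (or -P for all of them), just like we do later with -p 8080:8080. Once the image is built further down, you can see the recorded ports like this:

# EXPOSE only records metadata on the image; -p/-P actually publish at run time
docker image inspect --format '{{json .Config.ExposedPorts}}' modernjenkins/jenkins-master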

We need a build script so we’ll do the same thing that we did before (except now we have the script in our repo) by creating a build.sh that can also push. Let’s just duplicate this now:


PWD: ~/code/modern-jenkins/

cd images/jenkins-master
cp -rp ../jenkins-base/build.sh .
perl -pi -e 's~jenkins-base~jenkins-master~g' build.sh

Now we have a nice little build script for this image too. While a puppy might have died when we copy/pasta’d, I didn’t hear it whimper.
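
The build.sh itself lives in the repo rather than in this post. As a rough sketch only (an approximation, not the exact script; grab the real one from the repo), it does little more than build the tagged image and optionally push it:

#!/bin/bash
# build.sh (approximate sketch; the real script is in the modern-jenkins repo)
set -e

IMAGE=modernjenkins/jenkins-master

# Build and tag the image from the current directory
docker build -t "${IMAGE}" .

# Pass "push" as the first argument to push the freshly built image
if [ "$1" == "push" ]; then
  docker push "${IMAGE}"
fi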

There is one more file that we need for this image and it’s the startup script. Since the internet was generous enough to provide one, we should just use it. This is the script that powers the official image and I’ve got a copy of it just for you in my repo. To retrieve it, use wget:


PWD: ~/code/modern-jenkins/

cd images/jenkins-master
mkdir files
wget -O files/jenkins.sh \
  https://raw.githubusercontent.com/technolo-g/modern-jenkins/master/images/jenkins-master/files/jenkins.sh
chmod +x files/jenkins.sh

Build the image and test it out

Now that we’ve got all the files created that our image depends on, let’s build and test it a bit.


PWD: ~/code/modern-jenkins/

# Build it
cd images/jenkins-master
./build.sh

# Run it
docker container run --rm -ti modernjenkins/jenkins-master bash
docker version

# You should see the Docker client version only

Commit, push, PR

The master image seems to be gtg so let’s get it integrated. You may now be seeing what we mean by ‘continuous integration’. Every time we have a small chunk of usable work, we integrate it into the master branch. This keeps change sets small and makes it easier for everyone to incorporate the steady stream of changes into their work without spending days in Git hell.

You can compare your git tree to the state of mine at the unit2-part3 tag here: https://github.com/technolo-g/modern-jenkins/tree/unit2-part3 The Docker images are also available to pull if you don’t feel like building them for some reason.

Our next move will be to build the meat of our system: the plugins container. Awwww Yeaahhhhh

Next Post: Building the Jenkins Plugin image