Previously on #homelab:
I’ve
been a keen user of the
Jenkins
continuous-integration server (build daemon) since the days when it
was called Hudson. I set it up on a shadow IT basis under my desk at
Displaylink, and it was part of Electric Imp’s infrastructure
from the word go. But on the general principle that you don’t
appreciate a thing unless you know what the alternatives are like,
I’ve recently been looking at
Laminar for my homelab CI
setup.
Laminar is a much more opinionated application, mostly in the
positive sense of that term, than Jenkins. If Jenkins (as its icon
suggests) is an obsequious English butler or valet, then Laminar is
the stereotype of the brusque New Yorker: forgot to mark
your $JOB.init script as executable?
“Hey pal. I’m walking here.”
But after struggling occasionally with the complexity and sometimes
opacity of Jenkins (which SSH key is it using?), the simplicity
and humility of Laminar come as a relief. Run a sequence of
commands? That’s what a shell is for; it’s not Laminar’s job; Laminar
just runs a single script. Run a command on a timer? That’s what
cron (or
anacron)
is for; it’s not Laminar’s job; Laminar provides a trigger command that
you can add to your crontab.
So what does it provide? Mostly sequencing, monitoring,
statistics-gathering, artifact-gathering, and a web UI. (Unlike
Jenkins, the web UI is read-only – but as it exposes the
contents of all packages built using it, it’s still best to keep it
secure.) I have mine set up to, once a week, do a rustup
update and then check that all my projects still build and pass
their tests with the newest nightly build (and beta, and stable, and
the oldest supported version). It’s very satisfying to glance at the
Laminar page and be reassured that everything still builds and
works, even if I’ve been occupied with other things that week. (And
conversely, on the rare occasions when a new nightly breaks
something, at least I find out about it early, as opposed to it
suddenly being in my way at a time when I’m filled with the urge to
be writing some new thing.)
This blog post will cover:
- Installing Laminar
- CI for Chorale, a C++ package
- CI for Cotton, a Rust package
- Setting up Git to build on push
- CI for rustup
You should probably skim at least the first part of the C++ section even
if you’re mostly interested in Rust, as it introduces some basic Laminar
concepts and techniques.
By the way, it’s reasonable to wonder whether, or why, self-hosted
CI is even a thing, considering
that GitHub
Actions offers free CI for open-source projects (up to a certain,
but very generous, usage limit). One perfectly adequate answer is
that the hobby of #homelab is all about learning how things work
– learning which doesn’t happen if someone else’s cloud service is already
doing all the work. But there are other good answers too: eventually
(but not in this blog post) I’m going to want CI to run tests
on embedded systems, STM32s and RP2040s and the like
– real physical hardware, which is attached to servers here
but very much not attached to GitHub’s CI
servers. (Emulation
can cover some of this, but not for instance driver work, where the
main source of bugs is probably misconceptions about how the actual
hardware works.) Yet a third reason is trust: for
a released open source project there’s, by definition, no
point in keeping the source secret. But part of the idea of these
blog posts is to document best practices which commercial
organisations, too, might wish to adopt – and they might have
very different opinions on uploading their secret sauce to
third-party services, even ones sworn to secrecy by NDA. And even a
project determined to make the end result open-source won’t
necessarily be making all their tentative early steps out in the
open. Blame Apple, if you like, for that attitude; blame their habit
of saying, “By the way, also, this unexpected
thing. And you can buy it today!”
1. Installing Laminar
This is the part where it becomes clear that Laminar is quite a
niche choice of CI engine. It is packaged for both Debian and
Ubuntu, but there is a bug in both
the Debian
and Ubuntu
packages – it’s not upstream, it’s in the Debian patchset
– which basically results in nothing working. So you could
either follow the suggestions
in the
upstream bug report of using a
third-party Ubuntu
PPA or the project’s
own binary .deb
releases, or you could do what I did and install the broken
Ubuntu package anyway (to get the laminar user, the systemd
scripts, etc. set up), then build Laminar 1.2
from upstream
sources and install it over the top.
Either way, if you navigate to the Laminar web UI (on port 8080 of
the server) and see even the word “Laminar” on the page,
your installation is working and you’ve avoided the bug.
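A quick check from a shell does just as well (a sketch, assuming
you’re logged into the server itself):

$ curl -s http://localhost:8080/ | grep -i laminar

If that prints a match, the UI is being served.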
The default install sets the home directory for
the laminar user to /var/lib/laminar; this is the
Debian standard for system users, but to make things less weird for
some of the tools I’m expecting Laminar to run (e.g., Cargo), I
changed it (in /etc/passwd) to be /home/laminar.
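If you’d rather not edit /etc/passwd by hand, usermod can make the
same change (a sketch: it assumes the package’s systemd unit is named
laminar, and -m moves any existing contents across):

$ sudo systemctl stop laminar
$ sudo usermod -d /home/laminar -m laminar
$ sudo systemctl start laminar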
2. CI for Chorale, a C++ package
I use Laminar to do homelab builds
of Chorale, a C++
project comprising a UPnP media-server, plus some related bits and
pieces. For the purposes of this blog post, it’s a fairly typical
C++ program with a typical (and hopefully pretty sensible) Autotools-based build
system.
Laminar is configured, in solid old-school Unix fashion, by a
collection of text files and shell scripts. These all live (in the
default configuration) under /var/lib/laminar/cfg, which
you should probably chown -R to yourself and also check into a
Git repository, to track changes and keep backups. (The installation
process sets up a user laminar, which should remain a
no-login user.)
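Concretely, that one-time setup might look like this (assuming that
your username, like mine in the scripts below, is peter):

$ sudo chown -R peter: /var/lib/laminar/cfg
$ cd /var/lib/laminar/cfg
$ git init
$ git add -A
$ git commit -m "Laminar configuration as installed"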
All build jobs execute in a context, which allows sophisticated
build setups involving multiple build machines and so on; for my purposes
everything executes in a simple context called local:
/var/lib/laminar/cfg/contexts/local.conf:
EXECUTORS=1
JOBS=*
This specifies that only one build job can be run at a time (but it
can be any job), overriding Laminar’s default context which
allows for up to six executors: it’s just
a Raspberry
Pi, after all, we don’t want to overstress it.
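Contexts can also partition jobs rather than lumping them all
together; for instance (a hypothetical sketch with made-up job names,
assuming glob patterns in JOBS work as the * above suggests), quick
documentation jobs could be allowed to run several at a time in a
context of their own:

/var/lib/laminar/cfg/contexts/light.conf (hypothetical):
EXECUTORS=4
JOBS=*-doc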
2.1 C++: building a package
When it comes to the build directories for its build jobs, Laminar
is much more disciplined (or, again, opinionated) than
Jenkins: it keeps a rigid distinction between (a) the build
directory itself, which is temporary, (b) a persistent directory
shared between runs of the same job, the workspace, and (c) a
persistent directory dedicated to each run, the archive. So
the usual pattern is to keep the git checkout in
the workspace (to save re-downloading the whole repo each
time), then each run can do a git pull, make a copy of the
sources into the build directory, do the build (leaving the
workspace with a clean checkout), and finally store its built
artifacts into the archive. All of which is fine except for the
very first build, which needs to do the git
clone. In Laminar this is dealt with by giving the job an
init script (remember to mark it executable!):
/var/lib/laminar/cfg/jobs/chorale.init:
#!/bin/bash -xe
git clone /home/peter/git/chorale.git .
as well as its normal run script (which also needs to be marked
executable):
/var/lib/laminar/cfg/jobs/chorale.run:
#!/bin/bash -xe
(
flock 200
cd $WORKSPACE/chorale
git pull --rebase
cd -
cp -al $WORKSPACE/chorale chorale
) 200>$WORKSPACE/lock
cd chorale
libtoolize -c
aclocal -I autotools
autoconf
autoheader
echo timestamp > stamp-h.in
./configure
make -j4 release
cp chorale*.tar.bz2 $ARCHIVE/
laminarc queue chorale-package
There are a few things going on here, so let’s break it down. The
business with flock is a standard way, suggested
in Laminar’s own
documentation, of arranging that only one job at a time gets to
execute the commands inside the parentheses – this isn’t
especially likely to be an issue here, as we’ve
set EXECUTORS=1, but git would get into such a
pickle if two runs did collide that it’s a sensible precaution anyway. These
protected commands update the repository from upstream (here, a git
server on the same machine), then copy the sources into the build
directory (via hard-linking, cp’s -l, to save time and
space).
Once that’s done, we can proceed to do the actual build; the
commands from libtoolize as far as make are the
standard sequence for bootstrapping an Autotools-based C++ project
from bare sources. (It’s not
exactly Joel
Test #2 compliant, mostly for Autotools reasons, although at
least any subsequent builds from the same tree would be single-command.)
Chorale, as is standard for C++ packages, is released and
distributed as a source tarball, which in this case is produced by
the release target in the Makefile. The final cp
command copies this tarball to the Laminar archive directory
corresponding to this run of this job. (The archive directory will
have a name like /var/lib/laminar/archive/chorale/33, where
the “33” is a sequential build number.)
The final command, laminarc for “laminar
client”, queues up the next job in the chain, testing the
contents of the Chorale package. (The bash -xe at the top
ensures that if the build process produces any errors, the script
will terminate with an error and not get as far as kicking off the
test job.)
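One practical detail before queueing anything: both scripts need
their executable bits set, on pain of the brusque-New-Yorker
treatment from earlier:

$ chmod +x /var/lib/laminar/cfg/jobs/chorale.init
$ chmod +x /var/lib/laminar/cfg/jobs/chorale.run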
That’s all that’s needed to set up a simple C++ build job –
Laminar doesn’t have any concept of registering or enrolling
a job; just the existence of the $JOB.run file is enough
for the job to exist. To run
it (remembering that the web UI is read-only), execute laminarc
queue chorale and you should see the web UI spring into life as
the job gets run. Of course, it will fail if any of the
prerequisites (gcc, make, autoconf, etc.) are missing from the build
machine; add them either manually (sudo apt-get
install ...) or perhaps using Chef, Ansible or
similar. Once the build succeeds (or fails) you can click around in
the web UI to find the logs or perhaps download the finished
tarball.
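laminarc also has some read-only subcommands (these are from
Laminar’s own documentation) that are handy for checking on things
without opening the web UI:

$ laminarc show-jobs      # list all known jobs
$ laminarc show-queued    # runs waiting for an executor
$ laminarc show-running   # runs executing right now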
2.2 C++: running tests
The next job in the chain, chorale-package, tests that the
packaging process was successful (and didn’t leave out any
important files, for instance); it replicates what the user of
Chorale would do after downloading a release. This time the run
script gets the sources not from git, but from the package created
by (the last successful run of) the chorale job, so no init script
is needed:
/var/lib/laminar/cfg/jobs/chorale-package.run:
#!/bin/bash -xe
PACKAGE=/var/lib/laminar/archive/chorale/latest/chorale-*.tar.bz2
tar xf $PACKAGE
cd chorale*
./configure
make -j4 EXTRA_CCFLAGS=-Werror
make -j4 EXTRA_CCFLAGS=-Werror check
laminarc queue chorale-gcc12 chorale-clang
Like a user of Chorale, the script just untars the package and expects
configure and make to work. The build fails if
that doesn’t happen. This job also runs Chorale’s unit-tests
using make check. This time, we build with the C++
compiler’s -Werror option, to turn all compiler warnings
into hard errors which will fail the build.
If everything passes, it’s clear that everything is fine when using
the standard Ubuntu C++ compiler. The final two jobs, kicked-off
whenever the chorale-package job succeeds, build with
alternative compilers just to get a second opinion on the validity of the
code (and to avoid unpleasant surprises when the standard compiler is
upgraded in subsequent Ubuntu releases):
/var/lib/laminar/cfg/jobs/chorale-gcc12.run:
#!/bin/bash -xe
PACKAGE=/var/lib/laminar/archive/chorale/latest/chorale-*.tar.bz2
tar xf $PACKAGE
GCC_FLAGS="-Werror"
cd chorale*
./configure CC=gcc-12 CXX=g++-12
make -j4 CC=gcc-12 EXTRA_CCFLAGS="$GCC_FLAGS"
make -j4 CC=gcc-12 EXTRA_CCFLAGS="$GCC_FLAGS" GCOV="gcov-12" check
New compiler releases sometimes introduce new, useful warnings;
this script is a good place to evaluate them before adding them
to configure.ac. Similarly, the chorale-clang job checks
that the sources compile with Clang, a compiler which has often
found issues that G++ misses (and vice versa). Clang also has some
useful extra features, the undefined-behaviour sanitiser and address
sanitiser, which help to detect code which compiles but then can
misbehave at runtime:
/var/lib/laminar/cfg/jobs/chorale-clang.run:
#!/bin/bash -xe
PACKAGE=/var/lib/laminar/archive/chorale/latest/chorale-*.tar.bz2
tar xf $PACKAGE
cd chorale*
./configure CC=clang CXX=clang++
# -fsanitize=thread incompatible with gcov
# -fsanitize=memory needs special libc++
#
for SANE in undefined address ; do
CLANG_FLAGS="-Werror -fsanitize=$SANE -fno-sanitize-recover=all"
make -j4 CC=clang EXTRA_CCFLAGS="$CLANG_FLAGS"
make -j4 CC=clang EXTRA_CCFLAGS="$CLANG_FLAGS" GCOV="llvm-cov gcov" tests
make clean
done
If the Chorale code passes all of these hurdles, then it’s probably
about as ready-to-release as it’s possible to programmatically
assess.
3. CI for Cotton, a Rust package
All the tools and dependencies required to build a typical C++
package are provided by Ubuntu packages and are system-wide. But
Rust’s build system encourages the use of per-user toolchains
and utilities (as well as per-project dependencies). So
before we do anything else, we need to install Rust for
the laminar user – i.e., as
the laminar user – which requires a moment’s
thought, as we carefully set up laminar to be a no-login
user. So we can’t just su to laminar and run
rustup-init normally; we have to use sudo to execute one
command at a time from a normal user account.
So start
by downloading
the right rustup-init binary for your system – here, on a
Raspberry Pi, that’s the aarch64-unknown-linux-gnu
one. But then execute it (and then use it to download extra
toolchains) as the laminar user (bearing in mind that
rustup-init’s careful setup of the laminar
user’s $PATH will not be in effect):
$ sudo -u laminar /home/peter/rustup-init
$ sudo -u laminar /home/laminar/.cargo/bin/rustup toolchain install beta
$ sudo -u laminar /home/laminar/.cargo/bin/rustup toolchain install nightly
$ sudo -u laminar /home/laminar/.cargo/bin/rustup toolchain install 1.56
$ sudo -u laminar /home/laminar/.cargo/bin/rustup +nightly component add llvm-tools-preview
$ sudo -u laminar /home/laminar/.cargo/bin/cargo install rustfilt
The standard rustup-init installs the stable toolchain, so
we just need to add beta, nightly,
and 1.56 – that last chosen because it’s
Cotton’s “minimum supported Rust version” (MSRV),
which in turn was selected because it was the first version to
support the 2021 Edition of Rust, and that seemed to be as far back
as it was reasonable to go. We also
install llvm-tools-preview and rustfilt, which
we’ll be using for code-coverage later.
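As an aside, the MSRV can also be declared in the package manifest
itself, so that too-old toolchains fail with a clear error; the
rust-version field was itself stabilised in 1.56. (A hypothetical
snippet, not Cotton’s actual manifest:)

Cargo.toml:
[package]
name = "cotton"
version = "0.1.0"
edition = "2021"
rust-version = "1.56"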
So to the $JOB.run scripts for Cotton. What I did here was
notice that I’ve actually got a few different Rust packages to
build, and they all need basically the same things doing to them. So
I took advantage of the
Laminar /var/lib/laminar/cfg/scripts directory, and made
all the infrastructure common among all the Rust packages. When
running a job, Laminar arranges that the scripts directory
is on the shell’s $PATH (and note that it’s in
the cfg directory, so will be captured and versioned if you
set that up as a Git checkout). This means that, as far as Cotton is
concerned – after an init script that’s really just like
the C++ one:
/var/lib/laminar/cfg/jobs/cotton.init:
#!/bin/bash -xe
git clone /home/peter/git/cotton.git .
– the other build scripts come in pairs: one that’s
Cotton-specific but really just runs a shared script which is
generic across projects, and then the generic one which does the
actual work. We’ll look at the specific one first:
/var/lib/laminar/cfg/jobs/cotton.run:
#!/bin/bash -xe
BRANCH=${BRANCH-main}
do-checkout cotton $BRANCH
export LAMINAR_REASON="built $BRANCH"
laminarc queue cotton-doc BRANCH=$BRANCH \
cotton-grcov BRANCH=$BRANCH \
cotton-1.56 BRANCH=$BRANCH \
cotton-beta BRANCH=$BRANCH \
cotton-nightly BRANCH=$BRANCH
The assignment to BRANCH is a Bash-ism which means,
“use the variable $BRANCH if it exists, but if it
doesn’t exist, default to main”. This is
usually what we want (in particular, a plain laminarc queue
cotton will build main), but making it flexible will
come in handy later when we build the Git push hook. All the actual
building is done by the do-checkout script, and then on
success (remembering that bash -xe means the script gets
aborted on any failures) we go on to queue all the downstream
jobs. Note that when parameterising jobs using laminarc’s
VAR=VALUE facility, each VAR applies only to one job, not
to all the jobs named.
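In other words, in a hypothetical invocation like this one, each job
is built with only the assignment that immediately follows its own
name:

$ laminarc queue job-a BRANCH=main job-b BRANCH=wip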
The do-checkout script is very like the one for Chorale,
including the flock arrangement to serialise the git
operations, and differing only in that it takes the project and
branch to build as command-line parameters – and of course
includes the usual Rust build commands instead of the C++/Autotools
ones. (This time we can take advantage of the $PATH setup that
rustup-init provides for Cargo, by sourcing its environment file
directly.)
/var/lib/laminar/cfg/scripts/do-checkout:
#!/bin/bash -xe
PROJECT=$1
BRANCH=$2
# WORKSPACE is predefined by Laminar itself
(
flock 200
cd $WORKSPACE/$PROJECT
git fetch
git checkout $BRANCH
git pull --rebase
cd -
cp -al $WORKSPACE/$PROJECT $PROJECT
) 200>$WORKSPACE/lock
source $HOME/.cargo/env
rustup default stable
cd $PROJECT
cargo build --all-targets
cargo test
Notice that this job explicitly uses the stable toolchain,
minimising the chance of version-to-version breakage. We also want
to test on beta, nightly, and MSRV though, which
is what three of those downstream jobs are for. Here I’ll just
show the setup for nightly, because the other two are
exactly analogous. Again there’s a pair of scripts; firstly,
there’s the specific one:
/var/lib/laminar/cfg/jobs/cotton-nightly.run:
#!/bin/bash -xe
exec do-buildtest cotton nightly ${BRANCH-main}
Really not much to see there. All the work, as before, is done in
the generic script, which is parameterised by project and
toolchain:
/var/lib/laminar/cfg/scripts/do-buildtest:
#!/bin/bash -xe
PROJECT=$1
RUST=$2
BRANCH=$3
SOURCE=/var/lib/laminar/run/$PROJECT/workspace
(
flock 200
cd $SOURCE/$PROJECT
git checkout $BRANCH
cd -
cp -al $SOURCE/$PROJECT $PROJECT
) 200>$SOURCE/lock
source $HOME/.cargo/env
rustup default $RUST
cd $PROJECT
cargo build --all-targets --offline
cargo test --offline
Here we lock the workspace again, just to avoid any potential
clashes with a half-finished git update, but we don’t of
course do another git update – we want to build the same
version of the code that we just built with stable. For
similar reasons, we run Cargo in offline mode, just in case anyone
published a newer version of a dependency since we last built.
That’s the cotton-beta, cotton-nightly, and cotton-1.56
downstream jobs dealt with. There are two more: cotton-doc and
cotton-grcov, which deal with cargo doc and code coverage
respectively. The documentation one is the more straightforward:
/var/lib/laminar/cfg/jobs/cotton-doc.run:
#!/bin/bash -xe
exec do-doc cotton ${BRANCH-main}
And even the generic script (parameterised by project) is quite simple:
/var/lib/laminar/cfg/scripts/do-doc:
#!/bin/bash -xe
PROJECT=$1
BRANCH=$2
SOURCE=/var/lib/laminar/run/$PROJECT/workspace
(
flock 200
cd $SOURCE/$PROJECT
git checkout $BRANCH
cd -
cp -al $SOURCE/$PROJECT $PROJECT
) 200>$SOURCE/lock
source $HOME/.cargo/env
rustup default stable
cd $PROJECT
cargo doc --no-deps --offline
cp -a target/doc $ARCHIVE
It much resembles the normal build, except for running cargo
doc instead of a normal build. On completion, though, it copies
the finished documentation into Laminar’s $ARCHIVE
directory, which makes it accessible from Laminar’s web UI
afterwards.
The code-coverage scripts are more involved, largely because I
couldn’t initially get grcov to work, and ended up
switching to using LLVM’s own coverage tools instead. (But the
scripts still have “grcov” in the names.) Once more the
per-project script is simple:
/var/lib/laminar/cfg/jobs/cotton-grcov.run:
#!/bin/bash -xe
exec do-grcov cotton ${BRANCH-main}
And the generic script does the bulk of it (I cribbed this
recipe from
the rustc book, q.v.; I didn’t come up with it all
myself):
/var/lib/laminar/cfg/scripts/do-grcov:
#!/bin/bash -xe
PROJECT=$1
BRANCH=$2
SOURCE=/var/lib/laminar/run/$PROJECT/workspace
(
flock 200
cd $SOURCE/$PROJECT
git checkout $BRANCH
cd -
cp -al $SOURCE/$PROJECT $PROJECT
) 200>$SOURCE/lock
source $HOME/.cargo/env
rustup default nightly
cd $PROJECT
export RUSTFLAGS="-Cinstrument-coverage"
export LLVM_PROFILE_FILE="$PROJECT-%p-%m.profraw"
cargo test --offline --lib
rustup run nightly llvm-profdata merge -sparse `find . -name '*.profraw'` -o cotton.profdata
rustup run nightly llvm-cov show \
$( \
for file in \
$( \
cargo test --offline --lib --no-run --message-format=json \
| jq -r "select(.profile.test == true) | .filenames[]" \
| grep -v dSYM - \
); \
do \
printf "%s %s " -object $file; \
done \
) \
--instr-profile=cotton.profdata --format=html --output-dir=$ARCHIVE \
--show-line-counts-or-regions --ignore-filename-regex='/.cargo/' \
--ignore-filename-regex='rustc/'
Honestly? Bit of a mouthful. But it does the job. Notice that the
output directory is set to Laminar’s $ARCHIVE
directory so that, again, the results are viewable through
Laminar’s web UI. (Rust profiling doesn’t produce branch
coverage as such, but “Region coverage” – which
counts what a compiler would call basic blocks –
amounts to much the same thing in practice.) The results will look
a bit like this:
Why yes, that is very good coverage, thank you for
noticing!
4. Setting up Git to build on push
So far in our CI journey, we have plenty of integration,
but it’s not very continuous. What’s needed is
for all this mechanism to swing into action every time new code is
pushed to the (on-prem) Git repositories for Chorale or Cotton.
Fortunately, this is quite straightforward – or, at least,
good inspiration is available online. Pushes to the Git repository
for Cotton can be hooked by adding a script
as hooks/post-receive under the Git server’s
cotton.git directory (the hooks directory is
probably already there). In one of those Git features that at first
makes you think, “this is a bit over-engineered”, but
then makes you realise, “wait, this couldn’t actually be
made any simpler while still working in full generality”, the
Git server passes to this script, on its standard input, a line for
every Git “ref” being pushed – for these purposes,
refs are mostly branches – along with the Git revisions at the
old tip and new tip of the branch.
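Each of those lines looks something like this (revisions invented for
illustration):

2f7e30b6... 8a049396... refs/heads/main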
Laminar comes
with an
example hook which builds every commit on every branch
pushed. I admire this but don’t follow it; it’s great
for preserving bisectability, but seems like it would lead to
a lot of interactive rebasing every time a feature branch
is rebased on a later main – not to mention a lot of
building by the CI server. So the hook I actually use just builds
the tip of each branch:
git/cotton.git/hooks/post-receive:
#!/bin/bash -ex
while read oldrev newrev ref
do
if [ "${ref:0:11}" == "refs/heads/" ];
then
export BRANCH=${ref:11}
export LAMINAR_REASON="git push $BRANCH"
laminarc queue cotton BRANCH=$BRANCH
fi
done
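Git, too, only runs hooks which are marked executable, so the same
chmod ritual applies:

$ chmod +x /home/peter/git/cotton.git/hooks/post-receive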
The LAMINAR_REASON appears in the web UI and indicates
which branch each run is building:
5. CI for rustup
The final piece of the puzzle, at least for today, is the
continuous integration of Rust itself. As new nightlies, and betas,
and even stable toolchains come out, I’d like it to be a
computer’s job, not that of a person, to rebuild everything
with the new version. (Especially if that person would be me.)
This too, however, is straightforward with all the infrastructure put
in place by the rest of this blog post. All that’s needed is a new
job, rustup-update, whose run script invokes rustup itself:
/var/lib/laminar/cfg/jobs/rustup-update.run:
#!/bin/bash -ex
export LAMINAR_REASON="rustup update"
source $HOME/.cargo/env
rustup update
laminarc queue cotton assay sparkle
The rustup update command updates all the toolchains; once
that is done, the script queues up builds of all the Rust packages I
build. I schedule a weekly build in the small hours of Thursday
morning, using cron:
crontab (edit using “crontab -e”):
0 3 * * 4 LAMINAR_REASON="Weekly rustup" laminarc queue rustup-update
With a bit of luck, this means that by the time I sit down at my
desk on Thursday morning, all the jobs have run and Laminar is
showing a clean bill of health. As I’ve been playing with Rust
for quite a long elapsed time, but really only taking it
seriously in quite occasional bursts of energy, having
everything kept up-to-date automatically during periods when
I’m distracted by a different shiny thing is a real
pleasure.