I’m pleased to announce the first release of OCluster. A user can submit a build job (either a Dockerfile or an OBuilder spec) to the scheduler, which then runs the build on a worker machine, streaming the logs back to the client.
This is the build scheduler / cluster manager that we use for e.g. opam-repo-ci (which you may have seen in action if you submitted a package to opam-repository recently).
See ocurrent/overview for a quick overview of the various other CI services using it too.
To install and run the scheduler, use e.g.

```shell
opam depext -i ocluster
mkdir capnp-secrets
ocluster-scheduler \
  --capnp-secret-key-file=./capnp-secrets/key.pem \
  --capnp-listen-address=tcp:0.0.0.0:9000 \
  --capnp-public-address=tcp:127.0.0.1:9000 \
  --state-dir=/var/lib/ocluster-scheduler \
  --pools=linux-arm32,linux-x86_64
```
It will generate `key.pem` on the first run, as well as various capability files granting access to workers and clients. You then copy each generated pool capability (e.g. `pool-linux-x86_64.cap`) to each machine you want in that pool, and run `ocluster-worker pool-linux-x86_64.cap` to start the worker agent. See the README for full details.
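Concretely, joining a machine to a pool might look like this (the worker host name here is illustrative, not from the release notes):

```shell
# Copy the pool capability file to a machine that should join the pool
# (worker1.example.org is a placeholder for your own host)
scp ./capnp-secrets/pool-linux-x86_64.cap worker1.example.org:

# On that machine, start the worker agent; the capability file tells it
# how to connect back to the scheduler and which pool to join
ocluster-worker pool-linux-x86_64.cap
```

The capability file is the only credential the worker needs, so revoking a machine's access is just a matter of invalidating that capability.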
OBuilder is an alternative to `docker build`. The main differences are that it takes a spec in S-expression format, which is easier to generate than a Dockerfile; handles concurrent builds reliably; and keeps copies of the logs, so you still see the output even if someone else performed the same build step earlier and the result is therefore taken from the cache.
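For illustration, a minimal OBuilder spec might look something like this (the base image and commands are examples I've chosen, not taken from the release notes):

```sexp
; Start from a base image, as with FROM in a Dockerfile
((from ocaml/opam:debian-ocaml-4.14)
 (workdir /src)
 ; Copy the project sources into the build environment
 (copy (src .) (dst /src))
 ; Each run step produces a cacheable snapshot
 (run (shell "opam install . --deps-only"))
 (run (shell "dune build")))
```

Because the spec is plain S-expressions, a CI service can generate it programmatically rather than templating Dockerfile text.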
It currently supports ZFS and Btrfs for storage (it needs cheap snapshots) and `runc` for sandboxing builds. macOS support is under development, but not yet upstreamed. It should be fairly easy to add support for any platform that has some form of secure chroot.
OCluster supports monitoring with Prometheus, so you can see what the cluster is doing:
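As a sketch, if the scheduler exposes its metrics over HTTP, a Prometheus scrape job for it could look like this (the target address and port are assumptions for illustration; check your deployment's actual metrics endpoint):

```yaml
# prometheus.yml fragment (illustrative)
scrape_configs:
  - job_name: ocluster-scheduler
    static_configs:
      - targets: ['127.0.0.1:9090']  # assumed metrics address, not a documented default
```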