While experimenting with developing and deploying multiple MirageOS unikernels across multiple hosts, I quickly discovered that manual orchestration is tedious and doesn’t scale.
ukvm — the recommended unikernel monitor for KVM.
- One monitor instance per unikernel, with no awareness of other unikernels.
- Doesn’t have a configuration language - just CLI args.
- Doesn’t create required resources (e.g. network devices).
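To make the last two points concrete, launching a single unikernel with ukvm looks roughly like this. The `--net`/`--disk` flags follow the Solo5 ukvm CLI of this era, and the file names are placeholders; note that the tap device has to be created by hand first, since ukvm won’t do it for you:

```
# Create the tap device by hand -- ukvm does not create it.
sudo ip tuntap add tap100 mode tap
sudo ip link set dev tap100 up

# Launch one unikernel; every option is a CLI argument, no config file.
sudo ./ukvm-bin --net=tap100 --disk=data.img hello.ukvm
```

Multiply this by N unikernels on M hosts and the orchestration problem is obvious.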
libvirt — the management layer for virtualisation platforms (e.g. KVM, Xen).
- Tediously complex XML config language with few short-cuts.
- Not new-user friendly - requires a lot of reading just to get a single VM (i.e. unikernel) going.
- Does create configured resources if non-existent (e.g. network, pty, block devices).
- Provides many great hypervisor and management features (e.g. networks, network filters, storage).
- Only supports the virtio target.
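To illustrate the XML verbosity: a near-minimal libvirt domain definition for booting a single virtio unikernel looks something like the following (names and paths are placeholders). Even this stripped-down version requires understanding several layers of the schema:

```xml
<domain type='kvm'>
  <name>unikernel0</name>
  <memory unit='MiB'>128</memory>
  <vcpu>1</vcpu>
  <os>
    <type arch='x86_64'>hvm</type>
    <!-- a MirageOS unikernel built for the virtio target -->
    <kernel>/var/unikernels/hello.virtio</kernel>
  </os>
  <devices>
    <interface type='network'>
      <source network='default'/>
      <model type='virtio'/>
    </interface>
    <serial type='pty'>
      <target port='0'/>
    </serial>
  </devices>
</domain>
```

On the plus side, as noted above, libvirt will create the network, pty and block devices described here if they don’t already exist.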
Neither of these tools makes it easy to build, configure and manage multiple unikernels across multiple hosts. Scripting the XML or the CLI is clearly the wrong approach.
I searched around, hoping to find an existing Mirage OS orchestration system or something similar but didn’t get very far:
ocaml-libvirt — out-of-date OCaml bindings to libvirt’s C library.
- Major version(s) behind libvirt and doesn’t appear to be in use in any active project.
- If updated, it would provide the building blocks for a libvirt orchestrator.
- Obviously, anything written with these bindings can’t itself be a MirageOS unikernel, since it links against a C library.
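For reference, using the bindings would look roughly like this. The module and function names mirror ocaml-libvirt’s documented API (which in turn mirrors the C `virConnect*`/`virDomain*` calls), but given the bindings’ age this is an untested sketch, not working code:

```ocaml
module C = Libvirt.Connect
module D = Libvirt.Domain

let () =
  (* Connect to the local system hypervisor. *)
  let conn = C.connect ~name:"qemu:///system" () in
  (* Look up a previously defined unikernel domain by name. *)
  let dom = D.lookup_by_name conn "unikernel0" in
  Printf.printf "domain id: %d\n" (D.get_id dom);
  C.close conn
```

An updated version of exactly this kind of code is what a libvirt-based orchestrator would be built from.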
Unik — as far as the docs go, it provides (for MirageOS) a basic build wrapper and YAML-based ukvm configuration.
- Has hard dependencies on Docker and VirtualBox, which seems a bit restrictive.
- Generic instance management functions are limiting.
- Appears to be largely inactive.
An example unikernel platform
I guess what I’d like to see (or develop) is a MirageOS/unikernel platform that provides smart CI/CD and Kubernetes-like orchestration.
- User provides platform a git URI or tarball of unikernel dist
- Platform parses config.ml and determines unikernel requirements (e.g. devices, resources, access)
- Platform knows the capabilities of available hosts
- Platform allows the user to configure within the requirements and available capabilities (e.g. affinity, memory, count)
- Platform builds unikernel for the hypervisor/target supported by a suitable destination host
- Platform configures any environmental changes required by the unikernel or specified by the user (e.g. devices, network filters, DHCP leases)
- Platform deploys the unikernel to the destination host(s)
- Unikernel is now running and managed by the platform (e.g. auto-restart, failover to other hosts, auto-discovery, etc.)
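The user-facing side of the workflow above could be a small declarative spec, loosely analogous to a Kubernetes manifest. Everything below is hypothetical — the field names are invented purely to illustrate the idea:

```yaml
# Hypothetical deployment spec for the platform sketched above.
name: hello-http
source: https://github.com/example/hello-http.git   # git URI of unikernel dist
count: 3                      # desired number of instances
memory: 64MiB
affinity:
  hosts: [host-a, host-b]     # restrict to hosts whose capabilities match
network:
  dhcp: true
  filters: [allow-http]       # network filters to apply
restart: always               # auto-restart / failover policy
```

Everything else — parsing `config.ml` for device requirements, picking a build target, creating tap devices and DHCP leases — would be derived by the platform from this spec plus the unikernel source.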
I appreciate any suggestions, corrections or information on the topic. Thanks for reading.