[ANN] capnp-rpc 0.2 - persistence, encryption and access control

capnp
rpc
announce

#1

I’m pleased to announce the release of capnp-rpc 0.2, an OCaml implementation of the Cap’n Proto RPC specification.

If you haven’t used the library before, please see the documentation and tutorial. Cap’n Proto RPC aims to provide secure, efficient, typed communications between multiple parties.

Changes since v0.1

This release brings support for RPC Level 2: persistence, encryption and access control.

The API for implementing services and clients is mostly unchanged, but the APIs for setting up networking are very different. If you followed the tutorial with the 0.1 release, you will probably want to read the new version again from the Networking point onwards.

The main change is that when connecting to a service you now give a URI of the form:

capnp://hash:digest@address/service

The client will connect to address, check the server’s public key matches hash:digest, and then pass the (secret) service ID to get access to a particular service. The server will typically display the URI to use on start-up, or write it to a file. The communications are encrypted using TLS.
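To make the shape of these URIs concrete, here is a small, self-contained sketch that splits one into its parts using only the standard library. This is purely illustrative (the library does its own parsing); the function name and the example digest are made up:

```ocaml
(* Hypothetical sketch: split a "capnp://hash:digest@address/service" URI
   into its (authentication, address, service ID) components.
   This only illustrates the structure of the address. *)
let parse_capnp_uri uri =
  match String.index_opt uri '@' with
  | None -> None
  | Some at ->
    let scheme = "capnp://" in
    if String.length uri <= String.length scheme
       || String.sub uri 0 (String.length scheme) <> scheme
    then None
    else
      let auth =
        String.sub uri (String.length scheme) (at - String.length scheme) in
      let rest = String.sub uri (at + 1) (String.length uri - at - 1) in
      match String.index_opt rest '/' with
      | None -> None
      | Some slash ->
        Some (auth,
              String.sub rest 0 slash,                              (* address *)
              String.sub rest (slash + 1) (String.length rest - slash - 1))

let () =
  match parse_capnp_uri "capnp://sha-256:abc123@127.0.0.1:7000/main" with
  | Some (auth, addr, svc) ->
    Printf.printf "auth=%s address=%s service=%s\n" auth addr svc
  | None -> print_endline "not a capnp URI"
```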

If you want to disable TLS, use the form capnp://insecure@address. This should only be needed for interoperability with non-TLS services, as the system will generate keys and certificates automatically, making secure use just as easy as the non-secure case.

The other major new feature is support for persistent services. In version 0.1 you could specify an offer argument when creating a vat, telling it a service to provide in response to bootstrap requests.
Now, you pass a restore argument, which can restore different services depending on the service ID provided by the client.

The new Restorer.Table module provides a table-based lookup restorer, to which services can be added dynamically. If you have a lot of services and don’t want to add them all at startup, you can use Restorer.Table.of_loader and provide your own function for loading services.
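The idea behind a table-based restorer with a fallback loader can be sketched in plain OCaml. This is a conceptual model only; the names and types are stand-ins, not the library's actual `Restorer.Table` interface:

```ocaml
(* Conceptual sketch of a table-based restorer: map secret service IDs to
   services, with an optional fallback loader for IDs not in the table
   (the role Restorer.Table.of_loader plays in the real library). *)
type service = string  (* stand-in for a real capability *)

type restorer = {
  table : (string, service) Hashtbl.t;
  load : string -> service option;  (* called on a table miss *)
}

let make ?(load = fun _ -> None) () =
  { table = Hashtbl.create 16; load }

let add t id service = Hashtbl.replace t.table id service

let restore t id =
  match Hashtbl.find_opt t.table id with
  | Some s -> Some s
  | None -> t.load id   (* fall back to loading the service on demand *)

let () =
  let t = make ~load:(fun id ->
    if id = "on-disk" then Some "loaded lazily" else None) () in
  add t "main" "the main service";
  assert (restore t "main" = Some "the main service");
  assert (restore t "on-disk" = Some "loaded lazily");
  assert (restore t "unknown" = None)
```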

Documentation changes

  • The tutorial has been extended and a FAQ added.

  • The recommended layout of protocol files has changed. The Client sub-module is gone, and service becomes local.

  • The examples now have .mli files and there is a new store.ml example demonstrating persistence. The examples have been updated to the new layout convention.

API changes

  • The Capnp_rpc_lwt.Capability module adds some useful new functions:

    • broken creates a broken capability.
    • when_broken allows you to be notified when a capability breaks (e.g. because of a network failure).
    • wait_until_settled waits until a promise has resolved, for cases where you don’t want to pipeline (e.g. you want to send a large amount of data, and so prefer to find out where the service is first and avoid any forwarding).
    • equal tests if two capabilities designate the same service.
  • The new Capnp_rpc_lwt.Sturdy_ref module provides an abstraction for off-line capabilities. Sturdy_ref.connect can be used to get a live connection. If you try to connect to multiple services in the same vat, it will share a single connection automatically. Sturdy_ref.reader and Sturdy_ref.builder can be used for passing sturdy refs in messages.

  • The new Capnp_rpc_lwt.Restorer module is used to implement sturdy-refs at the hosting side.

  • The new Capnp_rpc_lwt.Persistence module provides support for the Cap’n Proto persistence protocol. Clients use Persistence.save to request a sturdy ref from a service, and services can use Persistence.with_sturdy_ref to answer such requests automatically.

  • The new Capnp_rpc_unix.Vat_config collects together all vat configuration in one place.

  • The new Capnp_rpc_unix.File_store can store Cap’n Proto structs in a directory. It can be useful when implementing persistence.

  • The new Capnp_rpc_lwt.Auth module provides support for generating and handling secret keys and fingerprints.

  • The new Capnp_rpc_lwt.Tls_wrapper provides support for doing TLS handshakes, with authentication and encryption.


#2

I’m unfamiliar with Cap’n Proto RPC but this looks impressive. Two observations:

  • The protocol compiler generates objects which is somewhat rare in modern OCaml code. Are objects simply the best fit for RPC or is there another reason?

  • I noticed somewhat long module names: Capnp_rpc_unix. This is just a matter of taste, but could this be structured as Capnp_rpc.Unix to let the module system do the work?


#3

The protocol compiler generates objects which is somewhat rare in modern OCaml code. Are objects simply the best fit for RPC or is there another reason?

Conceptually, Cap’n Proto is object-oriented, for a few reasons:

  1. You don’t control remote objects, and must assume they are stateful.
  2. In the object-capability model, services are always free to delegate. If you ask for a foo service from Bob, you might get a service hosted at Bob, at Carol or even one you host yourself.
  3. Pipelining means that you can treat a promise of a service like the service itself. In particular, you can invoke methods on it while it’s still unresolved.

Therefore, dynamic dispatch will always be needed.
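Point 3 is what makes the need for dynamic dispatch most visible: a promise must answer the same method calls as the service it will resolve to. Here is a toy, self-contained illustration of that idea (this is not how the library is implemented; all names are made up):

```ocaml
(* Toy illustration: callers invoke methods without knowing whether they
   are talking to the real object or a promise for it. The promise queues
   calls and replays them once it resolves. *)
class type logger = object
  method log : string -> unit
end

(* A concrete service. *)
let real_logger buf : logger = object
  method log msg = Buffer.add_string buf msg
end

(* A promise for a logger: same log method, but calls made before
   resolution are queued and forwarded later. *)
class promise_logger = object
  val mutable target : logger option = None
  val pending : string Queue.t = Queue.create ()
  method log msg =
    match target with
    | Some t -> t#log msg
    | None -> Queue.add msg pending      (* "pipelined" call *)
  method resolve (t : logger) =
    target <- Some t;
    Queue.iter (fun m -> t#log m) pending;
    Queue.clear pending
end

let () =
  let buf = Buffer.create 16 in
  let p = new promise_logger in
  p#log "early ";                 (* invoked while still unresolved *)
  p#resolve (real_logger buf);
  p#log "late";
  assert (Buffer.contents buf = "early late")
```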

Of course, you can still write OO code without using OCaml’s OO features. In this library, I use functional style for the client side, but use the OO syntax for defining services. The main difficulty with OCaml OO is that the compiler often has to infer all the types, which can lead to complex error messages. Here, though, the service implementation inherits from a generated base class that carries the types, so that isn’t a problem.

For example, the tutorial shows how to create a local logging service from a plain function, roughly like this (simplifying the implementation of the log method):

let local (fn : string -> unit) =
  Api.Service.Callback.local @@ object
    inherit Api.Service.Callback.service

    method log request = fn request.msg
  end

If done with modules instead, it would look something like:

module Callback : Api.Service.Callback.S with
                    type t = (string -> unit) =
  struct
    type t = string -> unit

    let log t request = t request.msg
  end

let local fn =
  Api.Service.Callback.local (module Callback) fn

Either works, but the first seems more natural to me.
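Both snippets above depend on the generated Api module, but the contrast can be reproduced with simple stand-ins, just to make the two styles runnable side by side (the type and module names here are hypothetical, not the generated code's):

```ocaml
(* Stand-ins for the generated code, so both styles compile on their own. *)
type request = { msg : string }

(* Object style: the service inherits from a base class carrying the types. *)
class virtual service = object
  method virtual log : request -> unit
end

let local_obj (fn : string -> unit) =
  object
    inherit service
    method log request = fn request.msg
  end

(* Module style: the service is a first-class module plus its state. *)
module type S = sig
  type t
  val log : t -> request -> unit
end

module Callback : S with type t = (string -> unit) = struct
  type t = string -> unit
  let log t request = t request.msg
end

let () =
  let seen = ref [] in
  let record msg = seen := msg :: !seen in
  (local_obj record)#log { msg = "via object" };
  let module C = Callback in
  C.log record { msg = "via module" };
  assert (List.rev !seen = ["via object"; "via module"])
```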

Also, the schema language allows inheritance, which we might want to support in the future (the OCaml plugin doesn’t currently support this).

I noticed somewhat long module names: Capnp_rpc_unix. This is just a matter of taste, but could this be structured as Capnp_rpc.Unix to let the module system do the work?

That would put the Unix code in the same package as the core code, which would prevent it from working on non-Unix systems (e.g. Mirage/Xen). Many OCaml libraries are structured like this (e.g. Lwt_unix) for the same reason.

You can, of course, create your own aliases as you please. The tutorial uses the full name in many places because I wanted to be clear about where the modules were coming from.

Note that the Unix module should only be needed in your main.ml to set things up. The rest of the code should use the core library, which I suggest opening.


#4

Hi Thomas,

this sounds very interesting, and has an amazing tutorial. While
looking through the dependency cone, I discovered core_kernel. Is there
a way to use Cap’n Proto and capnp-rpc on MirageOS? If this works, I’d
appreciate an example unikernel :-)

thanks,

hannes


#5

Is there an incompatibility between core_kernel and mirage? Core_kernel is designed to avoid unixisms, and is used extensively via js_of_ocaml.

Fwiw, Base is a bit lighter weight than Core_kernel, if executable size is a key issue.

y


#6

The non-unix backends to Mirage don’t have an easy cross compilation story for libraries with C bindings at the moment. We’re pretty close to solving this with the switch to jbuilder workspaces, but it isn’t quite there yet. So any dependencies with C bindings need to be specially ported at present, or else there will be unresolved symbols when the unikernel is linked.


#7

@hannes core_kernel is used by the code generator, not the runtime bits. You can see an example of capnp client/server code for the NETWORK signature here:

and

Most of the code could be used in a MirageOS unikernel (apart from the Fd and Rawlink modules). Use this if you want to have a remote network interface :-)


#8

Thanks Thomas!

Unfortunately, I can’t work out how these bits and pieces are built
together into a binary. If there were a MirageOS unikernel where I could
see the build steps needed, I’d be much happier. It shouldn’t take a
lot of time for someone familiar with these code bases. :-)


#9

@hannes, @samoht Unfortunately, there are still a few uses of Core_kernel even in the runtime. I think they should be fairly easy to remove, though. I’ve removed some already, and patches are welcome!

The main problem module is IO, which also depends on Unix. We don’t need that module for anything, so I plan to move it to a separate capnp.unix library.


#10

I’ve now replaced all uses of Core_kernel in the capnp runtime with code using the standard library. Surprisingly, the result is only a few lines longer: https://github.com/capnproto/capnp-ocaml/pull/41 (+186 −171 lines).

However, it still doesn’t work on Xen, due to the uint library having C stubs. There’s a PR adding native uint support to OCaml, but it’s marked as “long-term”, so I’ve asked there for advice on a short-term solution: https://github.com/ocaml/ocaml/pull/1201#issuecomment-333843197