I’m trying to optimise capnp-rpc. When I profile it with perf record etc., I find that ~50% of the running time is spent doing garbage collection.
First off, is this normal for this kind of application? Could it be that, since each of the requests has to traverse the loopback interface, the data persists for a while and thus causes pathologically bad behavior for the GC?
If that’s the case, is there any way to reduce this kind of behavior, or are there tuning parameters to minimise the impact?
50% of the time in GC is high. One thing worth pointing out, though, is that perf profiles based on CPU time rather than wallclock time. That means if the code you’re optimising does plenty of IO with only a small amount of other processing, the results you see will skew heavily towards the runtime machinery.
What does the output of time give you when you run your benchmark? That will output the total elapsed time as well as cpu time.
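If it’s easier to measure from inside the program than at the shell, a rough equivalent is to compare processor time against wallclock time around the benchmark. A small sketch (run_benchmark is just a placeholder for whatever the benchmark’s entry point is; needs the unix library):

```ocaml
(* Rough stand-in for running the benchmark under time:
   Sys.time gives CPU time, Unix.gettimeofday gives wallclock.
   [run_benchmark] is a placeholder for the benchmark's entry point. *)
let timed run_benchmark =
  let wall0 = Unix.gettimeofday () in
  let cpu0 = Sys.time () in
  run_benchmark ();
  Printf.printf "wall: %.2fs, cpu: %.2fs\n"
    (Unix.gettimeofday () -. wall0)
    (Sys.time () -. cpu0)
```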
Next up, if it turns out it’s mostly CPU time, then you likely want to establish what in the GC is consuming time. Which GC functions are taking up the most time according to the perf report?
I’d also dump out the GC statistics using OCAMLRUNPARAM=v=0x400 - that will tell you how many words were allocated and how many minor and major collections ran.
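If it’s more convenient to read those numbers from inside the program rather than at exit, the Gc module exposes the same counters; a minimal sketch:

```ocaml
(* Print the counters OCAMLRUNPARAM=v=0x400 reports at exit:
   words allocated plus the number of minor/major collections. *)
let report_gc () =
  let s = Gc.quick_stat () in
  Printf.printf
    "minor words: %.0f  promoted: %.0f  major words: %.0f  \
     minor collections: %d  major collections: %d\n"
    s.Gc.minor_words s.Gc.promoted_words s.Gc.major_words
    s.Gc.minor_collections s.Gc.major_collections
```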
The benchmark makes 100,000 requests in parallel. It might be more realistic to limit the number of concurrent requests. e.g. Lwt_stream.of_list ops |> Lwt_stream.iter_n ~max_concurrency:12 (fun v -> v ()).
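Spelled out a little more fully (assuming ops is the benchmark’s list of request thunks, each of type unit -> unit Lwt.t):

```ocaml
(* Cap the number of in-flight requests at 12 instead of firing all
   100,000 at once.  [ops] is assumed to be the benchmark's list of
   request thunks. *)
let run_throttled (ops : (unit -> unit Lwt.t) list) : unit Lwt.t =
  Lwt_stream.of_list ops
  |> Lwt_stream.iter_n ~max_concurrency:12 (fun op -> op ())
```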
The default message buffer size in capnp is 8k, which is larger than you need for ping messages. It might be more efficient to use a smaller default (e.g. 100 bytes). There are various places in the code where it would be good to specify better defaults, or expose the default size in the API (for user messages).
There probably is too much allocation going on though, so I’d suggest profiling memory allocations. You might be able to do that using spacetime, or you could try the new memory profiler in OCaml 4.10 (the API for that is hidden by default but it is possible to use it anyway - e.g. see https://github.com/ocurrent/ocurrent/pull/127).
I was trying to see where the buffer size is set. There is a 4k buffer for reads; however, I couldn’t see a buffer for writes at the capnp-rpc-net/endpoint.ml level or below (after lunch I’m working my way up from there…).
From the looks of the GC statistics, most of your allocations are going straight into the major heap. This happens for large allocations (over 256 words). Probably worth using some memory profiling to work out where this is happening.
Your bottlenecks from perf (mark and sweep) also indicate that time is going into major-heap work.
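For what it’s worth, the 256-word cutoff is easy to see in isolation, without capnp involved at all; a small experiment:

```ocaml
(* Allocations larger than 256 words bypass the minor heap: major_words
   grows while promoted_words barely moves, the same pattern as in the
   GC statistics above.  8 KiB buffers are ~1k words each. *)
let () =
  let before = Gc.quick_stat () in
  let bufs = Array.init 1000 (fun _ -> Bytes.create 8192) in
  ignore (Sys.opaque_identity bufs);
  let after = Gc.quick_stat () in
  Printf.printf "major words: %+.0f  promoted words: %+.0f\n"
    (after.Gc.major_words -. before.Gc.major_words)
    (after.Gc.promoted_words -. before.Gc.promoted_words)
```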
Since this is an RPC application, it might be possible to pool/reuse buffers. That has been extremely effective at reducing GC overhead in many contexts in the past. I would point you at Mark Hayden’s PhD thesis (Ensemble, Cornell, 1997), but also at the enormous amount of work done in Java over the years to pool buffers for this very reason.
I think that would be possible for this benchmark; in the general case, though, I think each of the messages would be independent. That being said, it may be possible to maintain a pool of buffers and thus avoid the GC penalty…
Indeed, I meant pooling the underlying buffers themselves.
As @Chet_Murthy mentioned earlier, many high performance frameworks make aggressive use of buffer pooling. I suggest having a look at Netty’s buffer pool implementation.
So I had a quick look at this last night, and it seems all of the allocations going directly to the major heap are coming from this [Capnp.Message.create].
Interesting! I guess if the messages were allocated using a smaller size then they’d go on the minor heap instead, and if the benchmark sent fewer at once then they might never need to be promoted to the major heap.
I will bet that there’s a robust literature on memory-pooling in Java applications, but I don’t really know where I’d look. Especially for a protocol like capnproto, where the entire idea is to not allocate, I think doing something to manage large flat buffers would be “in keeping with the spirit of the system”.
Also, re: “allocate smaller blocks” - that really doesn’t help as much as you might think. You’re doing a ton of RPCs in parallel, so it’s likely you’ll fill up the minor heap, and then you’ll be stuck again.
It’s been true forever in GCed systems (and, heck, in non-GCed systems) that a significant opportunity for performance improvement lies in “allocation avoidance”. If you avoid allocating memory, you win. It’s still true that too much of that (e.g. by pooling too many different kinds of memory) is also bad (Zorn et al wrote a seminal paper on this IIRC), but still, for the critical data structure in a system, it’s a valuable approach.
TL;DR I think that a good start for such a library would be to carve out Ensemble’s “iovec” library. But most people would see that as going too far, I fear.
It is difficult to implement such a thing for -any- language, because all the hard parts involve the interaction with the “host system”. That is, how will the pool library recognize that a particular buffer is no longer in use and can be recycled? That is the whole problem, really. If we assume there’s a definite entrypoint (e.g. “release()”) that signals this, then sure, it’s easy to create such a generic library. And so for Java/J2EE, where much of the pooling happens in layers of “I/O streams/readers/writers”, that’s possible, because those have well-defined lifetimes (associated with the request/response lifecycle). In this case (capnproto) that’ll be true too, for similar reasons, except that it will require caveats to users that they must not retain pointers to incoming messages, etc.
Also, if you’re really concerned about this sort of performance, you don’t want those buffers to be in the major heap, either, yes? You’d want them outside the heap completely. Going further, if you’re -really- concerned about performance, e.g. let’s say you’re working with an Infiniband/RoCEE card, you’re going to want to pin those buffers to physical memory and register them with the card and associated kernel library. More complexity.
I did the above with Ensemble’s “iovec” library, and it was quite lovely. Thing is, it’s way overkill for most applications, which just don’t need microsecond-level performance.
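To make the release()/off-heap idea above concrete, here is a very rough sketch (none of this is the capnp-rpc API): a pool of fixed-size buffers backed by Bigarray, so the payload bytes live outside the OCaml heap entirely and the GC only ever sees the small handles.

```ocaml
(* Rough sketch only - not the capnp-rpc API.  Buffers are Bigarrays,
   so their storage is allocated outside the OCaml heap; reuse depends
   on callers making an explicit [release] call and not retaining
   pointers afterwards, as discussed above. *)
type buf = (char, Bigarray.int8_unsigned_elt, Bigarray.c_layout) Bigarray.Array1.t

type pool = { buf_size : int; free : buf Queue.t }

let make_buf size : buf =
  Bigarray.Array1.create Bigarray.char Bigarray.c_layout size

let create_pool ~buf_size ~count =
  let free = Queue.create () in
  for _ = 1 to count do Queue.add (make_buf buf_size) free done;
  { buf_size; free }

let acquire p =
  if Queue.is_empty p.free then make_buf p.buf_size else Queue.pop p.free

let release p buf = Queue.add buf p.free
```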
There’s a “learning” from transaction-processing which might be useful to you: the graph of throughput as a function of presented concurrency (so: “throughput” on the Y axis, “presented concurrency” on the X axis) starts off as a straight line, then flattens out, and then -drops-. This is referred to as the “knee in the curve”. Finding that knee is useful, and can tell you things about your system.
It might be useful to build an automated benchmarker that runs benchmarks with varying parameters, collecting results so you can build such graphs. I certainly found it very, very useful in debugging Java runtime and web-app-server performance, back in the day.
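A minimal version of such a harness, reusing the Lwt_stream.iter_n throttle from earlier (make_ops is a stand-in for whatever builds the benchmark’s request thunks):

```ocaml
(* Hypothetical harness: run the same batch of requests at increasing
   concurrency levels and print throughput for each, to plot the
   throughput-vs-concurrency curve and spot the knee. *)
let sweep (make_ops : unit -> (unit -> unit Lwt.t) list) : unit Lwt.t =
  let open Lwt.Infix in
  Lwt_list.iter_s
    (fun conc ->
       let ops = make_ops () in
       let n = List.length ops in
       let t0 = Unix.gettimeofday () in
       Lwt_stream.of_list ops
       |> Lwt_stream.iter_n ~max_concurrency:conc (fun op -> op ())
       >|= fun () ->
       let dt = Unix.gettimeofday () -. t0 in
       Printf.printf "concurrency %4d: %8.0f req/s\n%!" conc (float_of_int n /. dt))
    [ 1; 2; 4; 8; 16; 32; 64; 128; 256 ]
```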
ETA: One other thing: when you find that “knee”, if the drop-off is severe, it always means there’s a bug somewhere. A typical such bug is lack of “admission control”: as early as possible in a TP system, there should be a throttle on admitting transactions, so that the internal concurrency of the system never exceeds some known maximum level. It is never useful in a TP system to admit new work when in-flight work already consumes all of one or more critical resources.
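One simple way to add that kind of throttle in Lwt (sketch only; handle_request is hypothetical) is to hand out a fixed number of “tickets” via Lwt_pool, so excess requests queue at the front door rather than piling up inside:

```ocaml
(* Admission control sketch: at most [max_in_flight] requests are being
   processed at any moment; the rest wait at the entry point instead of
   consuming internal resources.  [handle_request] is hypothetical. *)
let max_in_flight = 64

let tickets : unit Lwt_pool.t =
  Lwt_pool.create max_in_flight (fun () -> Lwt.return_unit)

let admit handle_request req =
  Lwt_pool.use tickets (fun () -> handle_request req)
```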