Getting OCaml Webmachine onto the TechEmpower benchmarks - Caqti losing queries?

Hi! First of all, I’m an OCaml beginner with no previous experience in asynchronous programming, and I’m struggling a bit to wrap my head around the correct way to use Lwt. Both of these things have led to code quality going out the window. Any feedback would be very welcome.

There is currently not a single web framework written in OCaml showing up in the TechEmpower benchmark suite, so I set out to add one. I ended up going with Webmachine because I liked the name; very scientific, I know. I followed the recommendations and implemented the easiest test cases. You can find my implementation here. Unfortunately, the “high” concurrency test fails on 2 out of 3 test cases.

I have added some rudimentary printf logging as well as postgresql logging. This is what I’m seeing:

Unexpected result from <postgresql://benchmarkdbuser:_@tfb-database:5432/hello_world?connect_timeout=15>: Received 0 tuples, expected one. Query: "SELECT id, randomNumber FROM World WHERE id = $1".174

I believe the 174 at the end is the id being queried for, which I know exists in the table. Postgresql logs no errors.
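
For context, the lookup is essentially a Caqti find, which requires exactly one row back. A rough sketch of the shape of my code (not the exact implementation):

(* Caqti_request.find expects exactly one row, so an id that matches
   nothing surfaces client-side as "Received 0 tuples, expected one",
   with no error in the Postgres logs. Caqti_request.find_opt would
   return None instead of failing. *)
let select_world =
  Caqti_request.find
    Caqti_type.int
    Caqti_type.(tup2 int int)
    "SELECT id, randomNumber FROM World WHERE id = $1"

let find_world (module Db : Caqti_lwt.CONNECTION) id =
  Db.find select_world id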

Any suggestions on how I would move forward with my debugging?

If someone would like to try this out you may execute the following:

git clone https://github.com/rbjorklin/FrameworkBenchmarks.git
cd FrameworkBenchmarks
./tfb --mode debug --test webmachine

And then in a new window call: curl -H "Content-Type: application/json" http://localhost:8080/queries/500

which consistently fails on the 4th request for me.

The versions I’m using are all documented in the dune-project file.

4 Likes

So it turns out that Random.int:

returns a random integer between 0 (inclusive) and bound (exclusive).

and the initialization process in the test suite does not create an entry for 0.
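
So the ids need shifting by one. A minimal sketch of the fix, assuming the World table holds ids 1 through 10000 as the TFB requirements specify:

(* Random.int 10_000 yields 0..9999, but World ids run 1..10000,
   so shift the result up by one. *)
let random_world_id () = 1 + Random.int 10_000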

I have made progress and now have an open PR to get ocaml-webmachine on to the results board: Add ocaml webmachine by rbjorklin · Pull Request #6041 · TechEmpower/FrameworkBenchmarks · GitHub

6 Likes

My PR was accepted and the first round of results should be available Wednesday morning here: https://tfb-status.techempower.com/

6 Likes

That’s great news, thank you for this effort! :smiley: I’ve been watching TechEmpower for a few years and this will be a great start to finally getting OCaml in there. I can’t wait to see how it compares to Go/NodeJS and maybe even Java/C++.

1 Like

Did something go wrong? I can’t see it in the results from October 1st or 5th, not even as a failure.

This is the link you’re going to want to follow. However, tests seem to be run from A to Z, meaning webmachine will be among the last to run. Give it another day or so and the results should show up.

2 Likes

It seems that there are results now, though they are not particularly good :slight_smile: It is slower in most categories than JS’s fastify, and in the JSON test it is even 6x slower (possibly due to Ezjsonm).

I wonder if it is because of cohttp; for example, the latest Opium uses httpaf and got better results (though still not satisfactory), but I thought that was because it was running a single process: OCaml web server run multiple processes
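
For reference, the JSON test only serializes one tiny fixed object; with Ezjsonm that is roughly the following (a sketch, not the exact benchmark code), so a 6x gap would presumably come from overhead around serialization rather than payload size:

(* The TFB JSON test response body is just {"message": "Hello, World!"}. *)
let json_body () =
  Ezjsonm.to_string (`O [ ("message", `String "Hello, World!") ])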

I would gladly add a new benchmark with Opium and httpaf, but I’m still trying to understand your PR:

- I haven’t used haproxy; is that config standard or a requirement for the benchmarks?
- If I were to add a new benchmark, perhaps it makes sense to keep it and replace only frameworks/OCaml/webmachine/src/tfb.ml, the Docker files, etc.?
1 Like

Hi @mudrz,

Yeah, the performance isn’t great. Part of that can most likely be attributed to me not caching the time, which forces a gettimeofday system call on every request; while running the benchmarks locally I see 25-30% system time.
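
A minimal sketch of the kind of fix I have in mind (the names here are illustrative, not from my PR): refresh a cached Date string once per second in a background Lwt thread, so handlers never call gettimeofday themselves.

let cached_date = ref ""

(* Format a Unix timestamp as an RFC 1123 HTTP date,
   e.g. "Sun, 06 Nov 1994 08:49:37 GMT". *)
let http_date now =
  let tm = Unix.gmtime now in
  let days = [| "Sun"; "Mon"; "Tue"; "Wed"; "Thu"; "Fri"; "Sat" |] in
  let months = [| "Jan"; "Feb"; "Mar"; "Apr"; "May"; "Jun";
                  "Jul"; "Aug"; "Sep"; "Oct"; "Nov"; "Dec" |] in
  Printf.sprintf "%s, %02d %s %04d %02d:%02d:%02d GMT"
    days.(tm.Unix.tm_wday) tm.Unix.tm_mday months.(tm.Unix.tm_mon)
    (tm.Unix.tm_year + 1900) tm.Unix.tm_hour tm.Unix.tm_min tm.Unix.tm_sec

(* Refresh once per second; request handlers just read !cached_date.
   Start it at program startup with: Lwt.async refresh_date *)
let rec refresh_date () =
  cached_date := http_date (Unix.gettimeofday ());
  Lwt.bind (Lwt_unix.sleep 1.0) refresh_date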

Please do not replace my webmachine implementation with Opium; instead, follow the instructions on how to add a new framework. You can find information regarding the tests and their requirements here. Regarding haproxy: it is not standard in these tests; in fact, most single-threaded implementations put themselves behind nginx instead. Feel free to use whichever you’re more comfortable with.

As for your question about running a single process, that’s where haproxy (or nginx) comes into play: the start-servers.sh script in my PR spins up one process per logical core. You can see the results for one process vs. many behind haproxy by filtering on ocaml. Looking at the numbers, something definitely looks off with the scalability: 1 process produces 12k req/s while 28 processes produce only 95k req/s, i.e. roughly 3.4k req/s per process where linear scaling would predict about 336k req/s in total.

2 Likes

Looking at the numbers, something definitely looks off with the scalability: 1 process produces 12k req/s while 28 processes produce only 95k req/s.

I could be wrong, but you might have to install conf-libev (the system package is libev-devel on Fedora) so Lwt can use the libev engine, which leverages epoll instead of select. That might improve the performance a bit compared to what you see at the moment.
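
If Lwt is built against libev it should be picked up automatically; you can also select the engine explicitly. A minimal sketch, assuming Lwt was compiled with libev support:

(* With conf-libev installed, Lwt can use libev (epoll on Linux)
   instead of the default select-based engine. *)
let () =
  Lwt_engine.set (new Lwt_engine.libev ());
  Lwt_main.run (Lwt_io.printl "running on libev")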

5 Likes

I could be wrong, but you might have to install conf-libev (the system package is libev-devel on Fedora) so Lwt can use the libev engine, which leverages epoll instead of select. That might improve the performance a bit compared to what you see at the moment.

I tried this locally and unfortunately it didn’t make a difference. That could be down to resource starvation on my machine, though, since running the benchmark locally means the client, the database, and the server all share one box. I will still create a PR with your suggested changes and try to get it upstreamed. The results won’t be in until late Thursday the 15th though.

AFAICT most of the submissions don’t include the Date header, which seems a little strange.

@rbjorklin, I managed to submit a PR for Opium; feel free to pop in and make improvements.

I am using your haproxy configuration so that the server implementations stay closer to each other, allowing easier comparison.

Some changes that could be moved to the webmachine dir: the Docker layers allow for incremental work without re-installing dependencies (by first copying the opam file and installing dependencies, then copying the code and building), and I added a Date implementation in the format they require.

3 Likes

Hmm, weird: locally opam exec -- dune build --profile release @install creates an executable and the tests pass, but in the CI test the file is not found: _build/default/bin/main.exe: No such file or directory

I am using your haproxy configuration so that the server implementations stay closer to each other, allowing easier comparison.
Some changes that could be moved to the webmachine dir: the Docker layers allow for incremental work without re-installing dependencies, and I added a Date implementation in the format they require.

This sounds great! I will retrofit your changes onto the webmachine implementation. Thanks!

I’m happy to say you’re correct and my initial results were indeed due to resource starvation. I have created a new PR using libev, along with some other tweaks, that improves performance by 3x on my local machine.

6 Likes

There is now fresh data for both Webmachine and Opium to be viewed here: https://www.techempower.com/benchmarks/#section=test&runid=bf3eceff-da94-46a8-87ab-6fc3ef39c12c

@mudrz Something seems to be off with Opium’s plaintext implementation; you might want to have a look.

wow, nice timing :slight_smile:

There isn’t much to the plaintext call within the benchmark implementation, but there might be something else wrong.

I posted this since I really hope that there is something fundamentally wrong and the performance can dramatically improve (I’ve been wanting to compare native OCaml to BuckleScript/ReScript for a server, since I’m more familiar with the Node world).