On the contrary, the fact that WebAssembly does not provide any GC makes it much easier for GC-based languages to be ported to it. Indeed, their runtimes do not have to be modified to accommodate any peculiarity of WebAssembly’s nonexistent GC; they can be ported as is (assuming they already support a 32-bit address space). For example, OCaml’s GC will work out of the box. The only issue is that it cannot give back memory to the host; but that is not specific to OCaml. That is a WebAssembly defect that plagues all programming languages, be they garbage-collected or not.
Wait, what? So e.g. in Rust any allocated memory is not returned to the system even after Rust deallocates it?
Correct, see: Wasm needs a better memory management story · Issue #1397 · WebAssembly/design · GitHub and Freeing/Shrinking memory · Issue #1300 · WebAssembly/design · GitHub
This is a big reason why the most successful applications of WASM (like Shopify’s third-party extension facility) demand the use of languages (like AssemblyScript) that have fairly constrained runtimes, which a typical OCaml or JavaScript programmer would find daunting for many tasks.
Shachar’s ocaml-wasm works very well, at least for our use case, which is a reasonably complex OCaml program [Coq].
So you’re actively using Coq via ocaml-wasm? Is that setup public, do you have any writeups of the experience, etc.?
Yup, we haven’t made it the “official” release yet, but it has been used by quite a few people to work around the lack of tail-call optimization in jsoo. Live versions:
It literally flies.
I guess @corwin-of-amber is the right person to comment more on his superb efforts.
Hi there @camarick; ocaml-wasm is very much bleeding-edge but it already works surprisingly well and I have used it to run Coq, esp. for the purpose of making the interactive version of Vols. I,II from the Software Foundations textbook (see https://jscoq.github.io/ext/sf and https://jscoq.github.io/ext/sf/tools/jscoq-tester.html).
Of course @ejgallego is exaggerating when he says that it flies; it still runs OCaml bytecode in interpreted mode on top of the WASM JIT. Performance is pretty reasonable still, except in the case of some intensive Coq tactics (in which case this is a third level of interpreter… ). The main gap right now is the standard libraries `str`, `unix`, and `threads`, for which I have compiled empty stubs, because dynamic loading of libraries in WASI is still immature. I have been able to compile `num`, and it works correctly because it does not depend on anything else. I am currently investigating how to build `zarith` (which requires `gmp`), because Coq 8.13 depends on it.
So yeah, this is not at all the coveted WASM backend for `ocamlc`, but it’s one existing solution and you can hack on it right now. Any help or comments are welcome!
Thanks for the details.
Could you expand on how you FFI with the browser APIs in that setting? E.g. is there a reasonable path to using my favourite OCaml browser bindings?
I am actually not doing anything fancy and just use what the WebAssembly runtime in the browser provides. The JS side can interact with the WASM side in two main ways: imports and exports.
- Exports are C functions that are made visible from the WASM instance. Once the WASM module is loaded (“instantiated”), they can be called like any JS function in the form `wasm.exports.func(...args)`. In `ocamlrun`, the runtime API is exported, so I can call e.g. `caml_alloc_string`, `caml_callback`, etc. (see here). It is a tad low-level, but usually you just call OCaml code once and then let it run until it exits.
- Imports are the reverse: JS functions that you bind to C symbols that the WASM executable expects to link against. The `libc` implementation is provided that way. What you can do in OCaml is define primitives and provide their implementation in JS when loading the WASM. Remember that your primitives will be called with `value`-typed arguments from the runtime API… you might have to use API calls (via `wasm.exports`) to interpret them.
So basically, if I want to call, say, `alert("hey!")` from WASM OCaml, I need a C stub for crossing from OCaml to C, and then a C import of the JavaScript `alert` that the stub calls.
Unless we get a good generic representation of JavaScript values and the runtime in C, so that the process can be boiled down to a few generic primitives, that looks like a rather inconvenient prospect.
(Aside: I spent quite a bit of time staring at the browser API standards last summer, and it struck me that the IDL in which they are defined is actually much more C-like and precise than the JavaScript types they end up being mapped onto.
Some of the browser APIs are not too bad and bring you interesting functionality. It could be nice to have them as C library APIs to code against for native OS functionality. If those could then be exposed directly in your WASM to represent and affect the browser functionality, you would effectively have bridged the gap between native apps and browser JavaScript apps. Hand-waves.)
That was indeed what I did in wacoq-bin, but perhaps there is an easier way in which you would not have to write any C code. I will have to look more into how `ocamlrun` resolves primitives to figure that out.
I have since improved dynamic binding support in wasi-kernel, which makes it possible to bind to JS functions as well, without needing a handwritten C stub to cross the border between OCaml and C. That is, the stub exists, but it is part of wasi-kernel and is introduced on demand.
This is how this mechanism was used in waCoq.
These glue binders are generated by this script: https://github.com/corwin-of-amber/wasi-kernel/blob/master/scripts/autogen-delegation.js
Direct link because the server only allows two links
This approach seems to be very maintainable.
How is its performance compared to jsoo?
It is unfortunately not very impressive. Loading objects (i.e. Dynlink’ing `.cma`s) is much faster, whereas executing the code itself is pretty much on par: sometimes faster, sometimes slower. The V8 JIT is extremely good with tight loops, and the WASM runtime is not there yet, so if you have such loops in your code, you can expect the interpreted version to be slower.
What about other JS engines, like the one in Firefox (SpiderMonkey? I don’t remember)? Not everyone has or wants V8.
In my experience, Firefox tends to be slower. It will be interesting to benchmark at some point. The lightweight JS engines have no JIT, so you can expect them to be the slowest. Android browsers are also weak on that front because of the memory consumption overhead that a JIT incurs.
The WebAssembly GC proposal has moved to Stage 2, according to this tweet. One step closer to a solid OCaml → WASM compiler.