It’s slower than accessing a record field because the offset of a particular method is not known statically (it depends on how the object was built). There is some caching of method lookups, but it is not as aggressively optimized as in OO-heavy languages.
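As a rough illustration (a minimal sketch; the names `point`, `record_x`, `object_x` are made up for this example), here is the same “getter” written against a record and against an object:

```ocaml
(* Record: [p.x] compiles to a load at a fixed, statically known offset. *)
type point = { x : int; y : int }

let record_x (p : point) = p.x

(* Object: [o#x] is a dynamic dispatch; the method's slot depends on how
   the object was built, so the runtime looks it up (with some caching)
   instead of reading a fixed offset. *)
let object_x (o : < x : int; .. >) = o#x

let () =
  let p = { x = 1; y = 2 } in
  let o = object method x = 1 method y = 2 end in
  Printf.printf "%d %d\n" (record_x p) (object_x o)
```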
It’s hard to answer this question in isolation. It depends a lot on the specific use-case (inheritance or not, “closed” object or not, etc.), and on how critical method-lookup performance is in the broader application. The overhead compared to a record field lookup is large (I’d guess at least 10x in many cases), but in most codebases it is only a negligible part of the total runtime. (In particular, I think that in the context of Eio, performance was not really a relevant argument.)
For more precise information, you should run benchmarks!
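For example, a very naive micro-benchmark along these lines (a sketch only; a real measurement should use a proper benchmarking harness and worry about inlining, warm-up, etc.):

```ocaml
(* Naive timing of record field access vs. object method dispatch.
   Numbers from such a loop are only indicative: the compiler may
   optimize the two loops differently. *)
type r = { value : int }

let rcd = { value = 42 }
let obj = object method value = 42 end

let time name f =
  let t0 = Sys.time () in
  let result = f () in
  Printf.printf "%s: %.3fs\n" name (Sys.time () -. t0);
  result

let iterations = 50_000_000

let () =
  let sum_record () =
    let s = ref 0 in
    for _ = 1 to iterations do s := !s + rcd.value done;
    !s
  in
  let sum_object () =
    let s = ref 0 in
    for _ = 1 to iterations do s := !s + obj#value done;
    !s
  in
  ignore (time "record field " sum_record);
  ignore (time "object method" sum_object)
```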
My rule of thumb is that if objects are used heavily, you are going to see some cost (to be weighed against the benefits of this programming style, in particular the extensibility/openness of row types); otherwise there is probably nothing to worry about (just look at the profiles).
For example: the visitors package uses objects to implement AST traversals. If visitors-derived traversals on large objects are on the critical path of your application, you will probably observe the object overhead. If you only use them in passing, it’s fine. For example, if we used visitors in the compiler, it would probably be fine, because structural mapping/folding is only a fraction of the work we do. On the other hand, the Gillian project had a large object-related performance regression because it uses this kind of objectful AST traversal on performance-critical paths (to implement formula substitutions in a symbolic analyser, if I remember correctly).
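To make the style concrete, here is a hand-written sketch of such an objectful traversal (not the code generated by visitors, and not Gillian’s code; the `expr`, `map_expr`, and `substitute` names are invented for the example). One method per constructor, so a subclass only overrides the cases it cares about, and every node visit goes through dynamic dispatch:

```ocaml
type expr =
  | Var of string
  | Int of int
  | Add of expr * expr

(* Identity map over the AST; each case is an overridable method. *)
class map_expr = object (self)
  method visit_expr (e : expr) : expr =
    match e with
    | Var x -> self#visit_var x
    | Int n -> self#visit_int n
    | Add (a, b) -> Add (self#visit_expr a, self#visit_expr b)

  method visit_var x = Var x
  method visit_int n = Int n
end

(* A substitution pass only needs to override the [Var] case. *)
class substitute (name : string) (replacement : expr) = object
  inherit map_expr
  method! visit_var x = if x = name then replacement else Var x
end

let () =
  let e = Add (Var "x", Add (Int 1, Var "x")) in
  ignore ((new substitute "x" (Int 5))#visit_expr e)
```

The openness is exactly what makes the style attractive (new passes reuse the generic traversal), but on a hot path every constructor visited pays for a method dispatch instead of a direct call.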