This sounds like the main pain that checked exceptions in Java gave the world. By adding a single exception deep in the stack, you had to either handle it immediately or modify the entire call chain. There is a lot of Java code that converts a checked exception into a runtime exception just because it’s too annoying to do right, which means a lot of code handles errors badly.
I wrote two blog posts on my experience using result
a while ago, linked below. Much of it still holds. Many of the pain points others have mentioned do exist, but in my judgement, given the current state of OCaml, results are strictly better than exceptions (at the very least at the API boundary, assuming you can convince yourself no exceptions escape it). I also believe that reasonable error values are necessary. For example, I know some APIs use some variation of ('a, string) result
which, IMO, is not a great API, as I end up comparing strings and hoping the string value is actually part of the API and not some rando value tossed in there. Doubly so when meaningful aspects of the error are encoded in the string and I have to decode it to decide what to do.
For my own things I do require that all errors are convertible to a string so I can just show them to the user, this is especially important for development and debugging, IME. This is one of the few places where I do wish we had something like type classes so I could do something like:
foo ()
>>= function
| Ok () -> yadda
| Error err -> show err
YMMV
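Absent type classes, one workaround is a per-module convention. Here is a hedged sketch (module and function names are hypothetical, not from any of my real code) where every error type ships with a to_string:

```ocaml
(* Hypothetical sketch: lacking type classes, adopt a convention where
   every error module exposes [to_string], so any error can be shown
   to the user during development and debugging. *)
module Parse_error = struct
  type t = Unexpected_char of char | Eof

  let to_string = function
    | Unexpected_char c -> Printf.sprintf "unexpected character %C" c
    | Eof -> "unexpected end of input"
end

let parse_digit s =
  if String.length s = 0 then Error Parse_error.Eof
  else match s.[0] with
    | '0' .. '9' as c -> Ok (Char.code c - Char.code '0')
    | c -> Error (Parse_error.Unexpected_char c)

let () =
  match parse_digit "x" with
  | Ok n -> Printf.printf "got %d\n" n
  | Error e -> print_endline (Parse_error.to_string e)
```

The convention is manual where a type class would be automatic, but it keeps the "every error is showable" property.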
Regarding pretty-printing the exception, why isn’t deriving.show sufficient? IIRC, it supports pretty-printing of extensible variants, and exceptions are such … I remember when I implemented my camlp5/pa_ppx version of deriving.show, I supported extensible variants and used it to pretty-print exceptions, mimicking the code in either deriving.show or deriving.yojson (I forget which, and it’s too late at night for me to go look it up).
It seems that we had some discussion on this
the part in the video at t=20m47s where Prof. Thomas tells how, years later, they had to run a training course on how to start the system because the operators had forgotten – that’s hilarious.
A bit late to the party, but here’s an overview of error handling methods that I did a while ago:
Composable Error Handling in OCaml (keleshev.com)
It compares the following approaches:
- Exceptions
- Result type with strings for errors
- Result type with custom variants for errors
- Result type with polymorphic variants for errors
I have re-read the article and decided to update it to use the recently-introduced Stdlib.Result
module and (let*)
syntax: https://keleshev.com/composable-error-handling-in-ocaml
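For readers unfamiliar with the newer syntax, here is a minimal sketch (my own, not taken from the article) of composing Stdlib.Result with (let*) and polymorphic-variant errors:

```ocaml
(* Minimal sketch of the (let*) style over Stdlib.Result, using
   polymorphic variants so errors from different sources unify
   into one open error type. *)
let ( let* ) = Result.bind

let parse_int s =
  match int_of_string_opt s with
  | Some n -> Ok n
  | None -> Error (`Parse_failure s)

let check_positive n =
  if n > 0 then Ok n else Error (`Not_positive n)

(* The composed function's error type is inferred as
   [> `Not_positive of int | `Parse_failure of string ]. *)
let parse_positive s =
  let* n = parse_int s in
  check_positive n

(* parse_positive "42"  = Ok 42
   parse_positive "abc" = Error (`Parse_failure "abc")
   parse_positive "-3"  = Error (`Not_positive (-3)) *)
```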
This tends to be defeated by the implementation of Fun.protect
that always saves the backtrace just in case. If you use exceptions for performance, you will increasingly have to use raise_notrace
where it matters.
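To make that concrete, here is a small sketch (names mine) of the pattern where raise_notrace is used for a purely internal control-flow exception that never escapes the function, so no backtrace is ever needed:

```ocaml
(* Sketch: an internal exception used only for early exit from
   iteration, raised with raise_notrace so no backtrace is recorded. *)
exception Found of int

let find_index p arr =
  try
    Array.iteri (fun i x -> if p x then raise_notrace (Found i)) arr;
    None
  with Found i -> Some i

(* find_index (fun x -> x > 2) [| 1; 2; 3 |] = Some 2 *)
```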
One rough edge I’ve found with open variant error types is that they don’t seem to work well with exhaustiveness checking. For example, the following compiles just fine despite the superfluous Error `bar
case:
module M : sig
  val foo : (unit, [> `foo ]) Result.t
end = struct
  let foo = Error `foo
end

let () =
  match M.foo with
  | Ok () -> ()
  | Error `foo -> ()
  | Error `bar -> ()
;;
We can change the type to val foo : (unit, [ `foo ]) Result.t
, but only at the cost of unification with other errors:
# let l = [ M.foo; Error `bar ];;
Line 1, characters 23-27:
Error: This expression has type [> `bar ]
but an expression was expected of type [ `foo ]
The second variant type does not allow tag(s) `bar
Or you can do a more painful coercion:
let l = [ (M.foo :> (_, [ `foo | `bar ]) Result.t); Error `bar ]
If anyone has a better technique for working with these, I’m all ears!
I’m holding onto the ledge of not diverging into a generalized “discourse” on the merits of monadic styles by my fingertips
This is a very helpful clarification; I was not aware offhand of raise_notrace
, thank you. I can certainly see its usefulness (especially for those rare cases when I might use exceptions for nonlocal returns and control flow that won’t escape some library boundary), but the prospect of code using this approach widely (and thus gutting exceptions’ utility for locating the source of errors) just for the hope of a smidgen more performance is unpleasant.
I think grounding any part of an exceptions vs. result debate/decision on performance is misguided in general. The ballgame for 99.5% of contexts is error-handling and sometimes recovery, and in these contexts, performance just isn’t a significant criterion in my experience.
(2) All monadic transformations of which I am aware (exceptions, state, control, I/O) are direct equivalents to the “standard semantics” for such language-features, e.g. as described in Michael J.C. Gordon’s book The Denotational Description of Programming Languages .
I won’t dispute the truth of this in any formal sense, but just because the operational semantics of those primitives are formally equivalent to their monadic reformulations doesn’t mean they are equivalent experientially or with regard to programmer understanding/comprehension. While in the fray, programmers think they understand what’s happening in their program when using mutable state or non-local control mechanisms or IO or concurrency primitives, but they clearly do not. The awkwardness that sometimes comes with monadic approaches to these primitives is often a consequence of our being forced to directly confront their essential complexity, rather than letting them float about ambiently.
Just as a brief experience report, I had a fantastic outcome recently in refactoring a stdlib Result
-based program of some complexity that also had some “side channel” state that had to be aggregated here and there. Handling the latter was incredibly awkward, using a ref off to the side…in my prior lisp life, I would have used a dynamically-scoped variable. But, BAP’s monads transformer library was waiting for me, and I was able to replace the mess with a very tidy State+Result monad, with zero performance penalty according to my benchmarking at the time of the switch.
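For readers who haven’t seen the combination, here is a rough hand-rolled sketch of a State+Result monad. This is illustrative only and is not BAP’s API (all names here are made up):

```ocaml
(* Illustrative State+Result monad: a computation threads a state ['s]
   and can fail with an error ['e]. Names invented for this sketch. *)
module State_result = struct
  type ('a, 's, 'e) t = 's -> ('a * 's, 'e) result

  let return x = fun s -> Ok (x, s)

  let bind m f = fun s ->
    match m s with
    | Error e -> Error e
    | Ok (x, s') -> (f x) s'

  let get = fun s -> Ok (s, s)          (* read the state *)
  let put s' = fun _ -> Ok ((), s')     (* replace the state *)
  let fail e = fun _ -> Error e         (* short-circuit with an error *)

  let run m init = m init
  let ( let* ) = bind
end

(* Example: count processed items in the state, failing on a bad one. *)
let process item =
  let open State_result in
  if item < 0 then fail (`Negative item)
  else
    let* n = get in
    let* () = put (n + 1) in
    return item

(* State_result.run (process 5) 0 = Ok (5, 1) *)
```

The "side channel" aggregation lives in the state parameter instead of a ref off to the side, and errors still short-circuit as with plain Result.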
OCaml exceptions are really cheap because they don’t materialize that backtrace until demanded.
Unfortunately, that is not true. If you have compiled your OCaml program with option -g
and if you have called Printexc.record_backtrace
, the exception backtrace is materialized as soon as you call Stdlib.raise
. (As already hinted, that is why Stdlib.raise_notrace
exists in the first place.) There is no way to create this backtrace after the fact. Either it is created by Stdlib.raise
or it does not exist.
I don’t have a solution to the problem, but just pointing out that this plays well with exhaustiveness checking: if you had failed to handle the foo
then you would get an error as expected, so the errors are checked exhaustively. But yes, it does mean you can try to handle errors you’ll never actually get.
Not a great solution, but you can specify the type, see below. Also, if you don’t care about the specific error but only about the type, you can catch it with match foo with | #M.err -> ...
module M : sig
  type err = [ `foo ]
  val foo : (unit, [> err ]) Result.t
end = struct
  type err = [ `foo ]
  let foo = Error `foo
end

let () =
  match M.foo with
  | Ok () -> ()
  | Error (`foo : M.err) -> ()
  | Error (`bar : M.err) -> ()
;;
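To illustrate the #-pattern mentioned above, here is a small self-contained sketch (module and names invented for the example) where the whole error set is caught at once:

```ocaml
(* Sketch: catching an entire named variant set with the #-pattern,
   instead of enumerating each tag at the match site. *)
module M : sig
  type err = [ `foo | `bar ]
  val foo : (unit, [> err ]) Result.t
end = struct
  type err = [ `foo | `bar ]
  let foo = Error `foo
end

let describe = function
  | `foo -> "foo"
  | `bar -> "bar"

let () =
  match M.foo with
  | Ok () -> ()
  | Error (#M.err as e) -> print_endline (describe e)
```

If M later grows its err set, the #-pattern match keeps compiling, which is either a feature or a hazard depending on whether you wanted the exhaustiveness reminder.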
TL;DR it appears that @silene is correct, that OCaml does record the backtrace (and pays a cost proportional to stack-depth) but this is only a small part of the cost of “materializing the backtrace” into the heap (which would need to be done to carry it around in a Result).
First, thank you for pointing this out: my understanding was based on seeing that exception backtraces aren’t reliably recorded, and the only way to really be sure of getting one, is to ask for it right after the exception is caught the first time.
But second [now, digging thru the source … ah, isn’t the source a dream to read thru?] I find that there are two steps: (1) copy the backtrace from the stack-frames into a “backtrace buffer” (which appears to be associated with the global state (and hence apparently static ?), and (2) copying that into a heap-allocated array for the application to use or discard, e.g. for adding to a Result.
I wrote a little test to check this out. I’ll insert the results, followed by the test and makefile. It’s possible I’ve done something wrong. The test:
(1) call record_backtrace
(2) with a flag, either calls get_backtrace() or does not, at the point it catches the exception
(3) and then K times, does a depth-N recursion, where it does a try/with block – within which it does a depth-M recursion, and inside that it does a failwith.
(4) so as we vary N, we are varying the depth of the stack
(5) varying M varies the depth of the stack between the try-catch and the raise (not as interesting)
(6) and we can control whether get_backtrace gets called to materialize the backtrace to the heap.
(7) the test is run as test1 <get-backtrace> <n> <m> <k>
It seems that indeed, the cost of a raise/catch is proportional to the depth of the stack. But also that get_backtrace
adds 10x to that cost.
Results:
./test1 false run1 1000 10 100
run1@100: 0.000066
./test1 true run1 1000 10 100
run1@100: 0.052373
./test1 false run1 10000 10 100
run1@100: 0.000741
./test1 true run1 10000 10 100
run1@100: 0.054388
./test1 false run1 100000 10 100
run1@100: 0.006881
./test1 true run1 100000 10 100
run1@100: 0.077785
source (e1.ml)
Printexc.record_backtrace true ;;

let get_backtrace = ref true ;;

let depth pre f post n =
  let rec drec n =
    pre () ;
    let rv =
      if n = 0 then f () else drec (n - 1)
    in post () ; rv
  in drec n
;;

let raiser () =
  failwith "caught" ;;

let catcher pre post k =
  try
    depth pre raiser post k
  with Failure _ ->
    if !get_backtrace then
      ignore (Printexc.get_backtrace ())
;;

let nop () = () ;;

let harness ~tag reps f =
  let stime = Unix.gettimeofday () in
  let () = f () in
  let etime = Unix.gettimeofday () in
  Fmt.(pf stdout "%s@%d: %f\n%!" tag reps (etime -. stime))
;;

let bt = bool_of_string Sys.argv.(1) in
let tag = Sys.argv.(2) in
let n = int_of_string Sys.argv.(3) in
let m = int_of_string Sys.argv.(4) in
let reps = int_of_string Sys.argv.(5) in
get_backtrace := bt ;
harness ~tag:tag reps (fun () -> depth nop (fun () -> catcher nop nop m) nop n)
;;
Makefile
test:: all
	./test1 false run1 1000 10 100
	./test1 true run1 1000 10 100
	./test1 false run1 10000 10 100
	./test1 true run1 10000 10 100
	./test1 false run1 100000 10 100
	./test1 true run1 100000 10 100

all: test1

test1: e1.ml
	ocamlfind ocamlc -g -package fmt,unix -linkall -linkpkg -o test1 e1.ml
but this is only a small part of the cost of “materializing the backtrace” into the heap (which would need to be done to carry it around in a Result).
Not quite. There is no good reason for any code to ever be calling Printexc.get_backtrace
in practice. This function should only be called when you actually want to pretty-print the backtrace for the user (which almost never happens). One should call Printexc.get_raw_backtrace
instead (and then call Printexc.print_raw_backtrace
, but only if actually needed). If you run your benchmark again, the timings you get will be mostly in the noise range, since Printexc.get_raw_backtrace
only allocates a block and copies the backtrace into it (as you already noticed).
Now, back to the topic of Result
. An implementation would not call Printexc.get_raw_backtrace
, since the whole point of Result
is to not raise exceptions, so there is no exception backtrace to record in the first place. Instead, it would call Printexc.get_callstack
. So, the real question is: how much slower is it to call Printexc.get_callstack
rather than just recording the backtrace for consumption by Printexc.get_raw_backtrace
?
Fundamentally, the code is the same, with only two differences. First, there is a heap allocation (as does Printexc.get_raw_backtrace
, for the same reason), but hopefully this will not be noticeable performance-wise. Second, exceptions record the backtrace only till the next exception handler. But for Printexc.get_callstack
, it is up to the user to guess the length of the backtrace to be recorded.
So, to summarize, whether you use exceptions or Result
, the overhead of backtraces will be exactly the same, as long as the Result
code correctly estimates the size of the backtrace to record. If the backtrace requested by Result
is systematically 10x the average length of an exception backtrace, then there will be a 10x overhead.
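For concreteness, here is a sketch of what "recording the callstack in a Result" could look like; the record type and the depth of 64 frames are my own arbitrary choices, not an established idiom:

```ocaml
(* Sketch: attach a call stack to an error value at construction time.
   The depth 64 is an arbitrary guess: too small truncates the trace,
   too large costs proportionally more to record. *)
type 'e traced = { error : 'e; trace : Printexc.raw_backtrace }

let error_with_trace e =
  Error { error = e; trace = Printexc.get_callstack 64 }

let report { error = _; trace } =
  print_string (Printexc.raw_backtrace_to_string trace)
```

Printexc.get_callstack records from the current program point, so unlike exception backtraces it works without any raise ever happening.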
Now, back to the topic of
Result
. An implementation would not call Printexc.get_raw_backtrace
, since the whole point of Result
is to not raise exceptions, so there is no exception backtrace to record in the first place. Instead, it would call Printexc.get_callstack
.
FWIW, I have never done this in my result
code. Once I’m making an API public, the errors should be descriptive enough to debug the cause without a backtrace.
On a philosophical tangent: IMO backtraces are not a thing a developer should want. In development it’s maybe useful but in production software, it’s a sign that the error handling is not being done well. It’s the ultimate breaker of abstraction. Of course, there may be pragmatic reasons to want a backtrace, but I think most of the arguments depend on living in a world where error handling is an after thought because it’s hard.
Now, back to the topic of
Result
. An implementation would not call Printexc.get_raw_backtrace
, since the whole point of Result
is to not raise exceptions, so there is no exception backtrace to record in the first place. Instead, it would call Printexc.get_callstack
.
And a minor bit of clarification: Printexc.get_raw_backtrace
’s documentation admonishes “Same restriction usage than Printexc.print_backtrace
”, which declares:
If the call is not inside an exception handler, the returned backtrace is unspecified. If the call is after some exception-catching code (before in the handler, or in a when-guard during the matching of the exception handler), the backtrace may correspond to a later exception than the handled one.
Indeed, if you replace get_callstack
in the little example I had in my original post with get_raw_backtrace
, the returned backtrace has no slots in it, and so prints nothing (though I suppose “unspecified” could mean even worse things depending on your choice of compiler or target runtime).
FWIW, I have never done this in my
result
code. Once I’m making an API public, the errors should be descriptive enough to debug the cause without a backtrace.
I find these kinds of “shoulds” to be very counterproductive. Of course descriptive error messages are great, but all other things being equal: more, better information that helps a downstream developer (often yourself!) is always preferable over less, worse information.
On a philosophical tangent: IMO backtraces are not a thing a developer should want. In development it’s maybe useful but in production software, it’s a sign that the error handling is not being done well. It’s the ultimate breaker of abstraction. Of course, there may be pragmatic reasons to want a backtrace, but I think most of the arguments depend on living in a world where error handling is an after thought because it’s hard.
This is demonstrably untrue. Bad things usually happen in unexpected ways, and the worst bad things definitionally happen in production. If this weren’t the case, you wouldn’t have massive industries built around production observability, APM, and incredibly sophisticated tooling in the largest ecosystems to help analyze production failures rooted in programming faults.
This is demonstrably untrue. Bad things usually happen in unexpected ways, and the worst bad things definitionally happen in production. If this weren’t the case, you wouldn’t have massive industries built around production observability, APM, and incredibly sophisticated tooling in the largest ecosystems to help analyze production failures rooted in programming faults.
I argue that this is because people are using error reporting tools that go around the static analysis in most languages. The time I need a backtrace in Ocaml is when some code threw an exception that I wasn’t expecting, or I made a human mistake and forgot to handle it. This very well could be confirmation bias but this issue has never happened to me when using result
because the errors are always in my face. There is never a hidden error. So I don’t need a backtrace to figure out how I got to a point where the weird exception was raised.
A former friend once told me that he always checks every return code. He wrote systems-level code (comms, storage), so sure. I’ve read and debugged massive piles of code in my job troubleshooting websphere dumpster fires. And sure, there were lots of times that code would “swallow” Runtime (unchecked) exceptions. But there were also lots of times that code would swallow checked exceptions, for want of anything else to do. The idea that somehow most application programmers actually know what to do with many of the exceptions they see … is somewhat unrealistic.
Let me say that differently: sure, for the OCaml community of hard-core researchers and (maybe) systems-jocks, expecting everybody to handle every exception and error is reasonable. But for the community into which you all expect OCaml to make inroads (unless you think that Reason/bucklescript/whatever are doomed), these assumptions are counterfactual. Backtraces exist and are valuable because people don’t handle errors well. That [that people don’t properly handle their error conditions] is a reality of all application programming in all environments, across all time.
Unless you are extremely careful (wrapping all functions that can potentially throw an exception), exceptions can still leak, and finding those without a backtrace is hard.