Division by zero (newbie first question)

Hi. I just got my hands on OCaml, expecting a mathematically rigorous language, etc.

I’m a bit mystified as to why the language designers would allow the following “horror” to occur:

# let a = 0.;;
val a : float = 0.
# let b = -0.;;
val b : float = -0.
# let f x = 1. /. x;;
val f : float -> float = <fun>
# a = b;;
- : bool = true
# f a = f b;;
- : bool = false

So a = b does not imply f a = f b. Is this something that commonly occurs throughout the language? What is the design rationale to define floating-point division this way? I understand that some chips distinguish 0 from -0 and define division by zero this way, but this doesn’t quite seem a good enough reason to me to remove “substitutability” from the language.

Thank you!

If you print the results of f a and f b, you’ll see that you get:

# f a;;
- : float = inf
# f b;;
- : float = -inf

OCaml uses IEEE 754 floating-point arithmetic, which defines that those operations must have the results you described. As to the virtues of having or not having a signed zero, whether they should be “equal”, etc., I am not qualified to say.
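
One way to see that the two zeros really are distinct values, even though (=) treats them as equal, is to look at the sign bit (Float.sign_bit is in the standard library’s Float module, assuming a reasonably recent OCaml):

# Float.sign_bit 0.;;
- : bool = false
# Float.sign_bit (-0.);;
- : bool = true
# 0. = -0.;;
- : bool = true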

When you say ‘mathematically rigorous language’, what do you mean exactly? What defines a mathematically rigorous language, to you?

It is mathematically rigorous: floats behave as floats, not as real numbers :wink:

Flambda people even use SAT for proving the correctness of float optimizations.

When I used the vague term “mathematically rigorous” (sorry about that), I guess what I meant is that I expected there to be some kind of mapping (think Curry-Howard correspondence) from an OCaml program to a subset of first-order logic… and that the correspondence would send “=” to “=”, for example. (I’m no specialist in Curry-Howard, type theory or first-order logic, so I can’t comment more than that. But it seems hard to imagine that a direct correspondence mapping “=” to “=” and mapping OCaml functions to mathematical functions could exist, given the above behavior.)

Anyway, it’s not my intention to start a food fight here, but I was genuinely surprised by the design decision to follow IEEE 754 arithmetic at the expense of breaking the ordinary logical meaning of “=” for floating-point numbers. In particular, the fact that a = b does not imply f a = f b seems a huge drawback to me in terms of reasoning about programs. I guess I would be curious to know the following from advanced users:

  1. in practice, does one assume that a = b implies f a = f b (and multi-argument generalizations thereof), or does one not trust = this way? (Again: in practice.)

And if some OCaml guru / designer is in the room:

  1. Is the decision to follow IEEE 754 arithmetic while simultaneously having 0. = -0. evaluate to true currently seen as a design mistake? Or was it all thought out, and the lesser of two evils?

I am much looking forward to learning OCaml, and again, it’s not my intention to start a food fight. I am just curious to know the perspective of advanced users on this issue. Thank you!

@Freyr666: our posts crossed, but I hope this clarifies my concerns.

# Float.equal (-0.0) 0.0;;
- : bool = true

Structural comparison sucks, QED. I hope one day we’ll have Modular Implicits, but until then don’t use polymorphic comparison and equality: it’s broken beyond repair and not what you want in 99% of cases. Alternative standard libraries like Base/Core and Containers shadow it when you do open Base.

Edit:

Oops, sorry, it seems structural comparison works properly here:

# (-0.0) = 0.0;;
- : bool = true

And -inf is not equal to inf, so everything is mathematically rigorous, it seems :wink:

For an example of why handling -0 correctly matters, see “Borda’s Mouthpiece” on p. 12 of “How Java’s Floating-Point Hurts Everyone Everywhere”.

Basically, OCaml tries to respect what processors do and what systems programmers expect. This includes having its floats behave according to the standard, even though at first glance the behaviour looks wonky to a mathematician (but it actually makes sense once you consider the constraints).

Another similar case is the mod operator, which denotes the same truncated remainder operation that % denotes in C and that processors implement; it does not behave the way mathematicians would expect in the presence of negative arguments (this one is just a design mistake on the part of early computer engineers imo, because the actual behaviour is less useful than the mathematically proper one).
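
For instance, in the toplevel (mod truncates toward zero, so the result takes the sign of the dividend):

# (-7) mod 2;;
- : int = -1
# 7 mod (-2);;
- : int = 1

The usual mathematical convention would instead give a non-negative result for (-7) mod 2, namely 1.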

Erm, to clarify, I think IEEE 754 says 0. = -0. must evaluate to true.

How does one end up with a -0. in the first place?

Would it ever be the result of a computation?

Yeah, if you do something like -1e-200 /. 1e200, you’ll get -0. It’s also possible to get one in other ways, but I think it’s usually the result of rounding a very small negative number to zero. (See also Wikipedia.)
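
In the toplevel, using that exact expression:

# -1e-200 /. 1e200;;
- : float = -0.
# 1. /. (-1e-200 /. 1e200);;
- : float = -inf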

In particular, the fact that a = b does not imply f a = f b seems a huge drawback to me in terms of reasoning about programs.

Also, note that this kind of equational reasoning does not hold in the presence of side effects (which exist in OCaml).

# let f = 
  let counter = ref 0 in 
  fun () -> incr counter; !counter;;
val f : unit -> int = <fun>
# f ();;
- : int = 1
# f ();;
- : int = 2
# f () = f ();;   (* the two calls return 3 and 4, in an unspecified evaluation order *)
- : bool = false

If you think about the definition in terms of limits, what is defined in the float standard is the behaviour that makes most sense imo:

lim_{x -> 0-} x = 0     # here 0- means approaching zero "from the left"
lim_{x -> 0+} x = 0     # here 0+ means approaching zero "from the right"
lim_{x -> 0-} x = lim_{x -> 0+} x = lim_{x -> 0} x = 0 

However

lim_{x -> 0-} 1/x = -inf
lim_{x -> 0+} 1/x = inf

and in fact

lim_{x -> 0} 1/x does not exist

I think here the float standard is more coherent and correct than simply declaring 1/0 = inf. Having a = b imply f a = f b makes sense only if f is actually well defined at a (or b), but 1/x is simply not defined at x = 0.

It’s sometimes possible to reason informally in that way, when you know something about f, a, and b. But the property doesn’t hold in general. Here’s another example where it doesn’t hold:

# let a, b = ref (), ref ();;
val a : unit ref = {contents = ()}
val b : unit ref = {contents = ()}
# let f x = x == a;;
val f : unit ref -> bool = <fun>
# a = b;;
- : bool = true
# f a = f b;;
- : bool = false

Because OCaml has nontermination, the logic it corresponds to under Curry-Howard is inconsistent. In fact, with nontermination, you can prove anything. If you want to experiment with the Curry-Howard correspondence, I would recommend something like Coq (written in OCaml!) or Agda.
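
A concrete example in the toplevel (the name prove_anything is just for illustration): the inferred type unit -> 'a claims a result of any type whatsoever, i.e. a “proof” of any proposition, but the function never actually returns:

# let rec prove_anything () = prove_anything ();;
val prove_anything : unit -> 'a = <fun>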

Remember that in the Curry-Howard correspondence, propositions are types. The (=) operator is not a type constructor, and it is not what propositional equality maps to. Martin-Löf Type Theory defines an “equality” type, denoted Id A x y, that represents propositional equality between x and y, terms of type A. In OCaml, something similar to it could be implemented as:

type ('x, 'y) id =
  | Refl : ('x, 'x) id

but because OCaml is not dependently typed, you can only compare types for equality, not terms.
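
For what it’s worth, here is a small, hypothetical use of that type: pattern matching on Refl reveals the equation between the two type parameters, which lets us coerce a value from one type to the other.

(* Matching on Refl teaches the type checker that x = y,
   so v : x is accepted at the result type y. *)
let cast : type x y. (x, y) id -> x -> y =
  fun eq v -> match eq with Refl -> v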

Fun fact: Homotopy Type Theory, an ongoing subject of research, gives new meaning to propositional equality! Both Coq and Agda have HoTT libraries.

Maybe I’m too late to the party, but one day I happened to come across an OCaml library implementing constructive real numbers, which might make someone expecting a rigorous programming language happy.

Jean-Christophe Filliâtre : Creal (lri.fr)

There are other useful (and not so useful) small-scale OCaml libraries on the same homepage; some of them are mentioned elsewhere in this forum.
