I’ve read the source code to try to understand but still couldn’t wrap my head around it. Could anyone explain to me the rationale behind it? Can it ever return a non-integer value?
I surmise (only a hypothesis) that this is because, in the past, Caml did not have a native Int64.t type, and the size of int (31 bits signed, so 30 bits for positive values) was not enough to accommodate a timestamp with subsecond precision.
And yes, it can return non-integer values:
OCaml version 4.14.1
Enter #help;; for help.
# #load "unix.cma";;
# Unix.gettimeofday();;
- : float = 1730323276.90061307
Edit: if you meant why it returns a float instead of some record mimicking struct timeval, then I don’t know. Perhaps because a record with two fields would be a bit more expensive than an (often boxed) float, but I’m not sure.
As far as I know, using a float is just another way of encoding the information of the POSIX timeval struct, which contains two integer fields: the seconds and the microseconds since the epoch. If you want just the number of whole seconds since the epoch, as an integer, you can do
int_of_float (Unix.gettimeofday ())
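Going the other way, a minimal sketch of recovering both timeval-style fields from the float (the helper name timeval_of_float is made up for this example):

```ocaml
(* Sketch: split a gettimeofday-style float back into the two
   integer fields of a POSIX struct timeval (seconds and
   microseconds).  timeval_of_float is a made-up name. *)
let timeval_of_float t =
  let secs = floor t in
  let usecs = int_of_float ((t -. secs) *. 1e6) in
  (Int64.of_float secs, usecs)

let () =
  let s, us = timeval_of_float 1730323276.90061307 in
  Printf.printf "%Ld seconds, %d microseconds\n" s us
```

The microsecond field is only accurate to about a unit, for the precision reasons discussed further down in this thread.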
Cheers,
Nicolas
What I was initially thinking of was a way to get the exact number of microseconds as an integer, but AFAICT that’s not a generally supported operation in other languages either.
It’s supported by languages that expose POSIX gettimeofday. Apparently this isn’t exactly what you get with OCaml’s Unix module, but of course you can roll your own. My casual impression is that the integer sizes in struct timeval may vary; on my platform as I write this, tv_sec is 64 bits and tv_usec is 32.
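Short of rolling your own C stub, one hedged sketch is to scale the float before truncating (usecs_of_timestamp is a made-up helper, and its accuracy is limited to roughly a microsecond by the double encoding of the timestamp):

```ocaml
(* Sketch: a whole number of microseconds since the epoch, derived
   from a gettimeofday-style float.  usecs_of_timestamp is a
   made-up name; accuracy is ~1 us at best. *)
let usecs_of_timestamp t =
  Int64.of_float (t *. 1e6)

(* Usage (requires the unix library):
     usecs_of_timestamp (Unix.gettimeofday ()) *)
```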
Right. However, note that the microsecond value returned by Unix.gettimeofday should be within 1 microsecond of the value returned by the operating system’s gettimeofday.
Indeed, a present-day Unix timestamp lies (and will do so for the next 80 years or so) between 2^30 and 2^32 seconds. For a double t with 2^e <= t < 2^(e+1), consecutive doubles are 2^(e-52) apart, so the gap around a current timestamp (2^30 <= t < 2^31) is 2^-22 ~ 2.4 * 10^-7, and it remains at most 2^-21 ~ 4.8 * 10^-7 < 10^-6 for any timestamp below 2^32 (the year 2106). This means that for the foreseeable future, the microsecond value derived from Unix.gettimeofday is within one microsecond of the value returned by the operating system’s gettimeofday.
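The gap can be checked directly: Float.succ (available since OCaml 4.07) returns the next representable double, and the timestamp below is just an illustrative present-day value.

```ocaml
(* Sketch: measure the spacing between consecutive doubles near a
   present-day Unix timestamp (the value below is illustrative).
   For t in [2^30, 2^31) the gap is 2^-22 ~ 2.4e-7 seconds. *)
let () =
  let t = 1730323276.9 in
  let gap = Float.succ t -. t in
  Printf.printf "gap near %.1f: %g s\n" t gap
```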
Cheers,
Nicolas