The machine representation of integers in OCaml differs from that in C: on a 64-bit architecture, an OCaml integer occupies one word (i.e. 64 bits), with the least significant bit set to 1 and the top 63 bits encoding the integer in two’s complement.
The Long_val macro from caml/mlvalues.h converts an OCaml integer to the equivalent C integer with a strikingly simple operation: just a right shift of the word by one bit!
This is confusing.
Consider the OCaml runtime representation of the integer -1 (minus one): it is a word of sixty-four 1 bits. I expect this to also be the machine representation of -1 in C. But by Long_val, we have to right-shift this sequence of 1s by one bit, which yields 011…11, that is, one 0 followed by sixty-three 1s. This is by no means the 64-bit two’s-complement encoding of -1. What is wrong with my reasoning?