I’m not sure it is: see the comment on exporting rawfloat, which would solve the issue I had. Maybe Random.float deliberately makes weaker guarantees.
There will be a conversion from int64 to float between the two ifs. Is that still optimized? (Just asking out of curiosity; I’m aware a single if costs little.)
I did make some (rough) measurements before, and even the floating-point operations seem negligible. Also, I learned in the other thread that Seq is expensive. But apparently creating the random data is the bottleneck.
Specifically: it seems to be worth creating two Gaussian floats out of two uniform floats and remembering the intermediate result for the next sample.
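Roughly what I mean, as a sketch (I’m assuming the polar Box–Muller variant here; the names are made up and this isn’t my actual code):

```ocaml
(* Polar (Marsaglia) Box-Muller: each accepted pair of uniform floats
   yields two independent Gaussian floats. The second one is cached in
   a ref and returned on the next call, so only every other call pays
   for the random draws. *)
let gaussian =
  let cached = ref None in
  fun () ->
    match !cached with
    | Some z ->
        cached := None;
        z
    | None ->
        (* Rejection-sample a point inside the unit disc. *)
        let rec draw () =
          let u = (2.0 *. Random.float 1.0) -. 1.0 in
          let v = (2.0 *. Random.float 1.0) -. 1.0 in
          let s = (u *. u) +. (v *. v) in
          if s >= 1.0 || s = 0.0 then draw () else (u, v, s)
        in
        let u, v, s = draw () in
        let factor = sqrt ((-2.0) *. log s /. s) in
        cached := Some (v *. factor);
        u *. factor
```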
I don’t think I do? I split it when creating the (ephemeral) sequence, not on every sample, am I wrong?
You mean adding up samples from uniform distributions to obtain a Gaussian one? I tried this with 12 samples (from 6 random int64s), but that was apparently slower. (Edit: I just noticed ziggurat is a different algorithm; I’ll look into it as an alternative, thanks for mentioning it.)
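For reference, the 12-sample approach looks roughly like this (a sketch, not my actual code, which drew the samples from 6 random int64s):

```ocaml
(* Sum of 12 uniform [0,1) samples has mean 6 and variance 1
   (Irwin-Hall / central-limit argument), so subtracting 6 gives an
   approximate standard Gaussian. Cheap per sample, but the tails are
   cut off at +/- 6. *)
let gaussian_by_summing () =
  let rec loop i acc =
    if i = 12 then acc -. 6.0
    else loop (i + 1) (acc +. Random.float 1.0)
  in
  loop 0 0.0
```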
I don’t need an as-fast-as-possible algorithm. But if possible, I’d like to omit unnecessary statements if that makes my code more readable (and not slower).
If someone is aware of another suitable algorithm for creating a Gaussian distribution (I’ll want integers in the end), I’d be thankful for advice.
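By “integers in the end” I mean something along these lines (a hypothetical sketch; the name and the `~sigma` parameter are made up, and it reuses the crude 12-uniform approximation just to be self-contained):

```ocaml
(* Scale a standard Gaussian sample by the desired standard deviation
   and round to the nearest integer. The underlying sample comes from
   the sum-of-12-uniforms approximation, so results are bounded by
   +/- 6 * sigma. *)
let gaussian_int ~sigma =
  let rec sum i acc =
    if i = 12 then acc else sum (i + 1) (acc +. Random.float 1.0)
  in
  let z = sum 0 0.0 -. 6.0 in
  int_of_float (Float.round (sigma *. z))
```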