Mysterious performance difference between close to identical programs

Again I would recommend having a look at Producing Wrong Data Without Doing Anything Obviously Wrong! by Mytkowicz, Diwan, Hauswirth and Sweeney (ASPLOS 2009).

They observed measurement bias caused by nothing more than link order or the size of the environment variables (!), both well above 8% on some benchmarks, and there are of course other sources of bias. An 8% difference due to code placement is not so surprising in this light. (And yes, it means that drawing conclusions from benchmarks is very difficult! One recommendation is to avoid micro-benchmarks, which can magnify these effects, and focus on more integrated macro-benchmarks.)
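A quick way to probe this kind of bias yourself is to re-run the same binary while only the size of the environment changes, which shifts where the stack and other data land in memory. A minimal sketch, where `./bench` is a hypothetical placeholder for your own benchmark binary:

```shell
# Re-run an identical benchmark under environments of different sizes.
# If timings vary noticeably across iterations, the measurement is
# sensitive to memory layout rather than to the code itself.
for pad in 0 256 1024 4096; do
  padding="$(printf '%*s' "$pad" '')"      # string of $pad spaces
  echo "measuring with $pad extra bytes of environment"
  PAD="$padding" ./bench 2>/dev/null || true   # './bench' is a placeholder
done
```

Running each configuration many times and comparing the distributions (not single runs) gives a sense of how much of an observed difference is layout noise.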