I guess you can take this for what it’s worth; I offer it only in the spirit of “old guy who got burned a lot …”
The probity (trustworthiness) and (controllable) copiousness of log-lines are absolutely essential for debugging. For my entire commercial career, I never used a debugger except on Perl scripts, because in all the runtimes that matter (e.g. Java/J2EE), attaching a debugger perturbs your program’s execution enough that the bug vanishes. For many, many significant runtimes, controllable-at-runtime logging (that is, you can turn log-lines on and off at runtime without recompiling) is the only tool for debugging and program-understanding. Even when a program is run on the command line, logging is an invaluable program-understanding tool.
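To make “controllable at runtime” concrete, here’s a minimal sketch using only the OCaml standard library. The names (`level`, `set_level`, `logf`) are hypothetical, not from any real library; packages like logs provide the same idea with more machinery:

```ocaml
(* A hypothetical, minimal runtime-controllable logger: the threshold
   is a mutable ref, so log-lines can be switched on and off while the
   program runs, with no recompilation. *)
type level = Error | Info | Debug

let threshold = ref Info

let set_level l = threshold := l

let enabled = function
  | Error -> true
  | Info -> !threshold <> Error
  | Debug -> !threshold = Debug

(* When the level is disabled, Printf.ifprintf accepts the same
   arguments but formats and prints nothing, so a suppressed
   log-line costs no formatting and no I/O. *)
let logf l fmt =
  if enabled l then Printf.eprintf fmt
  else Printf.ifprintf stderr fmt

let () =
  logf Debug "suppressed: threshold is Info\n";
  set_level Debug;               (* flipped at runtime, no recompile *)
  logf Debug "now emitted\n"
```

In a real program you’d drive `set_level` from a config file, a signal handler, or an admin endpoint, so the verbosity of a live process can be changed without restarting it.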
For instance, recently I was trying to figure out where opam caches its tarballs. I couldn’t work it out, partly because I couldn’t coax opam into logging the (nontrivial) UNIX commands it executes (e.g. tar, copying tarballs, etc.). That was disappointing.
There’s an excellent paper from Google about the Dapper system (“Dapper, a Large-Scale Distributed Systems Tracing Infrastructure”) that is worth reading. Dapper is basically “controlled logging” plus “very interesting post-processing”.
To sum up: it’s really, really important that log-lines be trustworthy. 100% trustworthy. The standard way to ensure this is to mandate that each log-line be presented all-at-once, and to format it into a buffer before writing it to storage/network. Just as important, logfiles must be *parseable* without recourse to anything special: parsed without errors, even in the face of faults of all kinds. If your logs aren’t trustworthy in faulty environments, they’re useless.
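Here is a minimal OCaml sketch of that “format all-at-once into a buffer, then write once” discipline. The record layout (sequence number, level, escaped message, tab-separated, one record per line) is an assumption of mine, not any standard:

```ocaml
(* Build the entire log record in a Buffer first, then emit it with a
   single write.  Layout: seq TAB level TAB escaped-msg NEWLINE. *)

let seq = ref 0

let log_line level msg =
  incr seq;
  let b = Buffer.create 128 in
  Buffer.add_string b (string_of_int !seq);
  Buffer.add_char b '\t';
  Buffer.add_string b level;
  Buffer.add_char b '\t';
  (* String.escaped guarantees one record = one physical line, even if
     the payload contains newlines or tabs, so the file stays
     parseable by dumb tools. *)
  Buffer.add_string b (String.escaped msg);
  Buffer.add_char b '\n';
  (* One output call plus a flush: a crash can truncate at most the
     last record; it cannot corrupt the ones already written. *)
  output_string stderr (Buffer.contents b);
  flush stderr

let () =
  log_line "INFO" "starting up";
  log_line "DEBUG" "tabs\tand\nnewlines stay on one line"
```

The escaping is what buys trustworthy parsing: because one record is always exactly one line, a reader can recover every complete record even after a mid-write crash.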