This question is prompted by the Transept post, and specifically by the linked-to Parsec paper on parser combinators. I’ve started reading the paper, and have been comparing and contrasting it with the implementation of the OCaml parser itself.
To wit, it seems like there are a number of places where it’s useful, even important, to have a preprocessor that takes a language that most definitely isn’t made of parser combinators and performs various syntactic transformations on it. The example that comes to mind is the heavy use of inlining, specifically to achieve modularity (of a sort) while preserving the grammar-compiler’s ability to detect and implement proper precedence rules. I’m betting there are other examples.
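To make that concrete, here’s a minimal sketch in Menhir syntax of my reading of what parser.mly is doing (the token names and rules here are made up for illustration): `binop` is a separate rule on the page, but `%inline` expands it at each use site, which is what lets the `%left` declarations resolve the shift/reduce conflicts that the factored-out rule would otherwise hide from the grammar-compiler.

```
%token <int> INT
%token PLUS TIMES EOF
%left PLUS
%left TIMES        (* binds tighter than PLUS *)
%start <int> main

%%

main:
| e = expr; EOF { e }

expr:
| i = INT                          { i }
| e1 = expr; op = binop; e2 = expr { op e1 e2 }

(* Without %inline, both productions of expr reduce through the same
   binop nonterminal, and the %left declarations cannot tell them
   apart; with %inline, each expanded production carries its own
   token's precedence. *)
%inline binop:
| PLUS  { ( + ) }
| TIMES { ( * ) }
```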
Also, I’m personally a massive LL(1) (over LALR) bigot, so it’s not that I’m persuaded by “yacc” or anything: it’s a general feeling that preprocessing a grammar with various algorithms/heuristics is valuable, and hence that parser combinators are not the way to go for writing language-processors.
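As one example of the kind of preprocessing I mean: an LL(1)/recursive-descent tool has to eliminate left recursion, rewriting `expr ::= expr PLUS term | term` into `expr ::= term (PLUS term)*` before descent can work at all (naive combinators loop forever on the left-recursive form). Here’s a minimal hand-written OCaml sketch of the transformed rule; the token type and `parse_term` are hypothetical stand-ins:

```ocaml
(* expr ::= expr PLUS term | term   -- left-recursive, loops under
   naive descent.  The standard rewrite is
   expr ::= term (PLUS term)*       -- same language, iterative. *)

type token = PLUS | INT of int

(* Hypothetical terminal parser: term ::= INT *)
let parse_term (toks : token list) : int * token list =
  match toks with
  | INT n :: rest -> (n, rest)
  | _ -> failwith "expected integer"

(* The transformed rule: parse one term, then fold over (PLUS term)*. *)
let parse_expr (toks : token list) : int * token list =
  let lhs, rest = parse_term toks in
  let rec loop acc toks =
    match toks with
    | PLUS :: rest ->
        let rhs, rest = parse_term rest in
        loop (acc + rhs) rest
    | _ -> (acc, toks)
  in
  loop lhs rest

(* parse_expr [INT 1; PLUS; INT 2; PLUS; INT 3] = (6, []) *)
```

The point being that this rewrite is a mechanical transformation a grammar tool can apply (or at least check) for you, whereas with combinators you end up doing it by hand at every left-recursive rule.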
Maybe I’m wrong. I’d love to be wrong. Those tools don’t fit well into the rest of our libraries and code, after all.