Multi-shot continuations gone forever?

Interesting!

My (probably naive) mental model of HANSEI-style libraries, which use multishot continuations, is that they are extensions/generalizations of a non-probabilistic “logic/non-deterministic monad” that searches for the set of solutions to a problem. Multishot continuations are naturally used in non-deterministic computations at backtracking points, to explore different search directions and collect the results. It is possible to avoid multishot continuations by replaying the whole search from the start each time (reference: Capturing the future by replaying the past), but this involves duplicated computation, so it is less efficient (reference: Asymptotic speedup with first-class control).
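For concreteness, here is a minimal OCaml sketch of this mental model (explicit continuation-passing style, not HANSEI’s actual API): the continuation `k` is an ordinary function, so a choice point can invoke it once per alternative, which is exactly the multi-shot use.

```ocaml
(* A hypothetical sketch: a nondeterministic choice point in explicit
   CPS. The continuation [k] (the rest of the computation) is called
   once per alternative -- the "multi-shot" invocation. *)
let choose (alts : 'a list) (k : 'a -> 'b list) : 'b list =
  List.concat_map k alts

(* Search for pairs (x, y) with x + y = 5. At each choice point, only
   the remainder of the computation is re-run per alternative, rather
   than replaying the whole search from the start. *)
let solutions : (int * int) list =
  choose [1; 2; 3] (fun x ->
    choose [2; 3; 4] (fun y ->
      if x + y = 5 then [(x, y)] else []))
(* solutions = [(1, 4); (2, 3); (3, 2)] *)
```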

Can you give some intuition for how other approaches to probabilistic inference work that do not require multishot continuations? Do they also duplicate computations, or do they use a magic trick in a different inference algorithm to avoid this issue?

I tried to find an answer to this question by reading the internship report, but I couldn’t locate one. (The report mentions HANSEI in the related work, but it does not discuss this question.) The report explains that the inference algorithm, HMC (Hamiltonian Monte Carlo), uses automatic differentiation; so it performs a sort of symbolic manipulation / analysis of the probabilistic program to sample from. But does this avoid repeated computations? It may instead be the case that the differential is as large as or larger than the program itself, and that the search algorithm using this differential in effect performs a program-sized computation at each search step, duplicating computations.
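To make my worry concrete, here is a rough OCaml sketch of the leapfrog integrator at the heart of HMC, under the assumption that the model has been reduced to the gradient `grad_log_p` of its log-density (a hypothetical name, not taken from the report). Each leapfrog step calls this gradient, which is itself roughly a program-sized computation:

```ocaml
(* A hypothetical sketch of the leapfrog integrator used by HMC,
   over a position [q] and momentum [p], assuming the probabilistic
   program has been compiled down to a gradient of its log-density.
   Note the gradient is evaluated (twice) on every step. *)
let leapfrog ~(grad_log_p : float array -> float array)
    ~(eps : float) ~(steps : int)
    (q0 : float array) (p0 : float array) : float array * float array =
  let q = Array.copy q0 and p = Array.copy p0 in
  for _ = 1 to steps do
    (* Half step on momentum: one program-sized gradient evaluation. *)
    Array.iteri (fun i gi -> p.(i) <- p.(i) +. 0.5 *. eps *. gi)
      (grad_log_p q);
    (* Full step on position. *)
    Array.iteri (fun i pi -> q.(i) <- q.(i) +. eps *. pi) p;
    (* Second half step on momentum: another full gradient evaluation. *)
    Array.iteri (fun i gi -> p.(i) <- p.(i) +. 0.5 *. eps *. gi)
      (grad_log_p q)
  done;
  (q, p)

(* Example: a standard normal target, log p(q) = -q^2 / 2, whose
   gradient is simply -q. *)
let (q', _p') =
  leapfrog ~grad_log_p:(Array.map (fun x -> -. x))
    ~eps:0.1 ~steps:10 [| 1.0 |] [| 0.5 |]
```

So even if HMC avoids multishot continuations, it does not obviously avoid re-running a program-sized computation at every step; it just seems to trade blind replays of the search for gradient-guided ones.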
