Let me first try to address your questions inline, and then I will give a walkthrough that shows how we can start with the approach proposed by Oleg and gradually improve it until we reach our final representation, which is theoretically equivalent but suits our needs better.
The Knowledge library is not specific to BAP, nor to binary or program analysis. It is a generic library for developing knowledge reasoning systems. In short, it provides two features: efficient universal values and a dictionary of universal values hidden behind a monadic interface. Under the hood it is an error monad spliced with a state monad. We narrowed the interface a little bit to enable future extensions. We're thinking about adding the continuation monad to enable retraction, but so far we're doing fine without it.
Yes, it is daunting, and although it is possible to write many analyses in the final style (and for some analyses it is even easier), we decided that forfeiting the classical direct style is not an option for us. One of our design constraints is that existing analyses shall work without any changes. In fact, we had a few design constraints that shaped our design; however, these constraints do not come from the field of program analysis and are not domain-specific, so they should be applicable to other projects as well. Our constraints came from the software design universe: we need even more extensibility than what tagless final was providing.
But back to the good old initial style. If you examine the modular and composable optimizations work carefully, you will notice that every optimization builds a small DSL of the form
type 'a term =
  | Domain : int term
  | Specific : string term
  | Constructors : unit term
  | Rest : 'a Other.term -> 'a term
and all analyses are still written in the convenient classical style with pattern matching. The analyses that require state or context are implemented using the corresponding monad, e.g., type 'a repr = ctxt -> 'a term or type 'a repr = ctxt -> 'a term * ctxt.
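For illustration, here is a minimal sketch of such an initial-style analysis (the term DSL and the ctxt type are invented for this example, they are not BAP's): a plain pattern-matching evaluator that threads a context in the ctxt -> 'a * ctxt state-passing style.

```ocaml
(* an illustrative initial-style DSL, not BAP's actual term type *)
type _ term =
  | Int : int -> int term
  | Add : int term -> int term -> int term

(* a hypothetical context: count the literals we have seen *)
type ctxt = { consts : int }

(* a classical direct-style analysis, written with pattern matching,
   threading the context explicitly in state-monad fashion *)
let rec eval : type a. a term -> ctxt -> a * ctxt = fun t ctxt ->
  match t with
  | Int x -> x, { consts = ctxt.consts + 1 }
  | Add (x, y) ->
    let x, ctxt = eval x ctxt in
    let y, ctxt = eval y ctxt in
    x + y, ctxt

let () =
  let v, ctxt = eval (Add (Int 1, Add (Int 2, Int 3))) { consts = 0 } in
  assert (v = 6 && ctxt.consts = 3)
```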
In BAP we managed to reify this design pattern into an OCaml representation. With monads it is simple: we just provided a common monad that satisfies everyone's needs, the knowledge monad. For the common representation we used universal values, but more on them later.
To summarize, in BAP you can write a fold-style analysis in the final form. If you wish, you may pick a representation and write an analysis in the initial style. If your representation is initial, then your analysis will be as generic as the final one. Otherwise, you may lose some information.
We use HOAS, but we still allow users to introduce variables with arbitrary names. We handle it at lifting/parsing time, i.e., when we reflect the concrete representation into abstract terms. And of course, it is possible to introduce indexed variables later; there is no limitation on that. We also distinguish between read-only pure variables, which are introduced using val scoped : 'a Value.sort -> ('a var -> 'b pure) -> 'b pure, and mutable fresh variables. There are also global variables, which are not HOASed and are denoted by their names.
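To illustrate the HOAS part with a toy sketch (this is not BAP's actual interface): a binder takes a host-language function, so scoping and substitution are delegated to OCaml itself, just as scoped above delegates them to the function it receives.

```ocaml
(* a toy HOAS language, for illustration only *)
type _ exp =
  | Int : int -> int exp
  | Add : int exp -> int exp -> int exp
  (* the binder: the body is a host-language function,
     so OCaml handles variable scoping for us *)
  | Let : 'a exp * ('a exp -> 'b exp) -> 'b exp

let rec eval : type a. a exp -> a = function
  | Int x -> x
  | Add (x, y) -> eval x + eval y
  | Let (v, body) -> eval (body v)

let () = assert (eval (Let (Int 21, fun x -> Add (x, x))) = 42)
```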
We represent abstract types with existentials, aka universals. Theoretically, it is the same. In practice, it lets us express our intent to the type checker in a much clearer way. In OCaml, abstract data types are too unrestricted, which limits their applicability.
Here 'a is the type index that denotes the modulus of bitvector arithmetic, so that we can express that add is defined only for bitvectors of the same width, and that add : u8 bitv -> u8 bitv -> u8 bitv is not the same as add : s32 bitv -> s32 bitv -> s32 bitv. Besides the modulus, we also encode the signedness of the bitvector. This is all part of the knowledge framework, which lets us build hierarchies of classes, each partitioned into sorts, and make our constructors well-typed but at the same time substantially polymorphic (so we do not need to have add8, add32, and so on up to an infinite number of constructors).
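A minimal sketch of the idea using plain phantom types (u8, s32, and bitv here are illustrative stand-ins, not BAP's actual sort machinery):

```ocaml
(* phantom indices standing in for sorts; uninhabited on purpose *)
type u8
type s32

(* the index 'a is a phantom parameter tracking the sort *)
type 'a bitv = Bits of int

let u8 x : u8 bitv = Bits (x land 0xff)
let s32 x : s32 bitv = Bits x

(* one polymorphic add instead of add8, add32, ...;
   both arguments must have the same index *)
let add (type a) (Bits x : a bitv) (Bits y : a bitv) : a bitv =
  Bits (x + y)

let () = assert (add (u8 1) (u8 2) = Bits 3)
(* add (u8 1) (s32 2) is rejected by the type checker *)
```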
In general, we can express arbitrary properties of terms, which in general would require dependent types. The only caveat is that these properties can be introduced arbitrarily by the module which defines the property. This implies that for each property there is a trusted kernel, which has an abstract interface:
module type Prop = sig
  type prop
  val prove : 'a term -> prop sort term option
end
Here prove is the only function that can construct values of type prop sort, and it performs a check (which could be unsound) of whether this property holds for the given term. This is the only weak point.
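For illustration, here is a toy trusted kernel in this shape. It is simplified: the sort indexing from the signature above is dropped, and both the term type and the purity property are invented for the example.

```ocaml
(* an illustrative term language with one effectful form *)
type _ term =
  | Int : int -> int term
  | Add : int term -> int term -> int term
  | Load : string -> int term (* an effectful operation *)

(* the trusted kernel: [prop] is abstract, so a proof can only
   be produced by [prove] *)
module Pure : sig
  type prop
  val prove : 'a term -> prop option (* simplified: no sorts here *)
end = struct
  type prop = Proof
  let rec is_pure : type a. a term -> bool = function
    | Int _ -> true
    | Add (x, y) -> is_pure x && is_pure y
    | Load _ -> false
  let prove t = if is_pure t then Some Proof else None
end

let () =
  assert (Pure.prove (Add (Int 1, Int 2)) <> None);
  assert (Pure.prove (Load "mem") = None)
```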
Concerning GADTs. First of all, it is possible to implement the prove function using a GADT representation for a term (since the final form allows any representation, it is also possible to represent a term as a GADT). Then, if we can prove the property, the type checker will be able to verify our proof. Of course, only a limited set of properties can be proved this way.
Another issue with GADTs is that we can't really serialize the proved property, as it usually doesn't have any physical representation. Thus, when we unparse/load a stored program, we have to repeat all the proofs from scratch.
In BAP we use nominal witnesses to store term properties, so we can save them and reconstruct them later.
It is just a shorthand for "evaluate this term, extract its sort, and call a function on the sort and the term's evaluation". It is common that the semantics of an operation depends on the sort of the term, e.g., the signedness of the representation, the address space of the memory, the size of the storage, etc.
No, it is about manipulating theories (aka algebras, aka structures, aka analyses) as first-class values.
Now we're ready for the long part of the answer.
The long road towards the final style
Let’s start with the classical representation of the tagless final style using functors, given the signature
module type SYM = sig
  type 'a repr
  val int : int -> int repr
  val add : int repr -> int repr -> int repr
end
we can build terms in that signature using a functor,
module Ex2 (I : SYM) = struct
  open I
  let res = add (add (int 1) (int 2))
                (add (int 3) (int 4))
end
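To make the example concrete, here is the standard way such a functor-encoded term is run: we instantiate it with an interpreter. R below is a hypothetical metacircular interpreter where a term simply denotes its value; the definitions from above are repeated so the snippet is self-contained.

```ocaml
module type SYM = sig
  type 'a repr
  val int : int -> int repr
  val add : int repr -> int repr -> int repr
end

module Ex2 (I : SYM) = struct
  open I
  let res = add (add (int 1) (int 2))
                (add (int 3) (int 4))
end

(* the metacircular interpreter: a term denotes its value *)
module R = struct
  type 'a repr = 'a
  let int x = x
  let add = ( + )
end

let () =
  let module M = Ex2 (R) in
  assert (M.res = 10)
```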
But functors are syntactically heavy. They are also not first-class values, and one of our constraints is that the set of analyses is not known statically. We employ a plugin system and want a plugin to be able to register an analysis pass in our pipeline without any need for recompilation or relinking.
This is another constraint that we have: we need first-class algebras. OK, we have first-class modules, and indeed, the syntactically heavy functor Ex2 could be represented as a function that is parametrized by a packed module, e.g.,
let ex2 (module I : SYM) =
  let open I in
  add (add (int 1) (int 2))
      (add (int 3) (int 4))
This won't type-check, as the type I.repr escapes its scope. There are no closures for types in OCaml. No problem, we know that we can extend the scope by quantifying over it; usually we do something like
let ex2 (type t) (module I : SYM with type repr = t) : t =
  let open I in
  add (add (int 1) (int 2))
      (add (int 3) (int 4))
Wait, repr is not a ground type in SYM but a parameterized type; in other words, it is a type scheme that denotes a whole family of types. We cannot quantify 'a repr = t for all 'a, and this is a fundamental problem, which highlights the difference between functors and functions parametrized by modules.
We could probably turn 'a repr into a monotype, but we want our algebras to be many-sorted and, in general, to rely on a typed representation rather than coercing everything into a single type. So this is not an option for us.
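For completeness, here is what that monotype variant looks like; it does type-check, at the price of a single-sorted carrier (MONO and R are illustrative names, not part of the original design):

```ocaml
module type MONO = sig
  type repr (* a ground type, not a type scheme *)
  val int : int -> repr
  val add : repr -> repr -> repr
end

(* quantifying over the ground type works fine *)
let ex2 (type t) (module I : MONO with type repr = t) : t =
  let open I in
  add (add (int 1) (int 2))
      (add (int 3) (int 4))

module R = struct
  type repr = int
  let int x = x
  let add = ( + )
end

let () = assert (ex2 (module R : MONO with type repr = int) = 10)
```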
So is this problem really that fundamental, or maybe we just don't know the right syntax to persuade OCaml to accept our definition? Indeed, given the current signature, it is not immediately obvious what can go wrong, as we can construct only int repr terms [1]. Let's make our example a little bit more illustrative and add string repr to our signature
module type SYM2 = sig
  type 'a repr
  val int : int -> int repr
  val add : int repr -> int repr -> int repr
  val out : string repr -> unit repr
  val str : int repr -> string repr
end
Now, we have an obvious problem,
let ex3 (type t) (module I : SYM2 with type 'a repr = t) : t =
  let open I in
  out (str (add (int 1) (int 2)))
We have our type constructor t unified with unit repr, string repr, and int repr at the same time. So, is it a dead end? Do we have to choose another host language?
Not really. Probably we chose the wrong representation, as indeed we were asking too much of polymorphism in the ex2 function. So, let's revisit the functor representation
module Ex2 (I : SYM) = struct
  open I
  let res = add (add (int 1) (int 2))
                (add (int 3) (int 4))
end
What it fundamentally says is that res is parametrized by a Sigma-algebra with a many-sorted carrier 'a repr and operations add and int. To represent this we need bounded polymorphism and a way to specify those bounds to OCaml; otherwise, we will fall into the same pit: OCaml will either overgeneralize our types (i.e., quantify them over types that we do not need) or undergeneralize them (i.e., lose polymorphism). Bounded polymorphism brings existential quantification to mind, and we immediately recall that there are a few other representations of existentials in OCaml. So, maybe instead of keeping 'a repr abstract and bounding it with a signature, we shall turn it inward and represent repr as a universal (well, indeed, we all know that abstract types have existential type)
type 'a sort = string (* for demo purposes *)

type 'a repr = Term : {
  sort : 'a sort;
  repr : 'r;
} -> 'a repr
With this representation, we explicitly stated that only the representation 'r is existential, and now we can define a signature without 'a repr,
module type SYMR = sig
  val int : int -> int repr
  val add : int repr -> int repr -> int repr
end
and, finally, we can type our example
let ex2 (module I : SYMR) =
  let open I in
  add (add (int 1) (int 2))
      (add (int 3) (int 4))
and even our extended example,
module type SYMR2 = sig
  include SYMR
  val str : int repr -> string repr
  val out : string repr -> unit repr
end

let ex3 (module I : SYMR2) =
  let open I in
  out (str (add (int 1) (int 2)))
The only problem is that we made our representation too abstract, so it is now impossible to write even a metacircular interpreter, e.g.,
module Ru : SYMR = struct
  let int x = Term {sort = "int"; repr = x}
  let add (Term {repr = x}) (Term {repr = y}) = Term {
    sort = "int";
    repr = x + y;
    (*     ^^^^^ *)
    (* `x` and `y` are existential and of different types *)
  }
end
So we need to add a type witness parameter to our representation which will allow us to prove that representations are of the same type,
type 'a witness = ..
type ('a, 'b) eq = Equal : ('a, 'a) eq
type 'a repr = Term : {
  sort : 'a sort;
  tkey : 'r witness;
  repr : 'r;
} -> 'a repr

module Ru : SYMR = struct
  type _ witness += Int : int witness
  let tkey = Int
  let int x = Term {sort = "int"; repr = x; tkey}
  let add (Term {repr = x; tkey = Int}) (Term {repr = y; tkey = Int}) =
    int (x + y)
end
And now we can finally write our metacircular interpreter. With one caveat: our add is no longer total. Indeed, since we moved 'a repr into the global scope, the Ru.add function can be applied to the representation of any other interpreter. We will resolve this issue later; hang on, there are other serious issues that we have to resolve before that.
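Here is a compact, self-contained sketch of that partiality (the sort field is dropped for brevity, and Rs is an invented second interpreter): Ru.add pattern-matches on its own witness, so applying it to foreign terms fails at run time.

```ocaml
type 'a witness = ..
type 'a repr = Term : { tkey : 'r witness; repr : 'r } -> 'a repr

module Ru = struct
  type _ witness += Int : int witness
  let int x = Term { tkey = Int; repr = x }
  (* partial: only accepts terms built by this interpreter *)
  let add (Term {repr = x; tkey = Int}) (Term {repr = y; tkey = Int}) =
    int (x + y)
end

(* a second interpreter sharing the same global ['a repr] *)
module Rs = struct
  type _ witness += Str : string witness
  let int x = Term { tkey = Str; repr = string_of_int x }
end

let () =
  (* fine within the same interpreter *)
  ignore (Ru.add (Ru.int 1) (Ru.int 2));
  (* but partial across interpreters *)
  match Ru.add (Rs.int 1) (Rs.int 2) with
  | exception Match_failure _ -> ()
  | _ -> assert false
```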
Let's go back to the original modular optimization framework in the final style. It is defined as
module type Trans = sig
  type 'a from
  type 'a term
  val fwd : 'a from -> 'a term (* reflection  *)
  val bwd : 'a term -> 'a from (* reification *)
end

module SYMT (X : Trans) (F : SYMOBS with type 'a repr = 'a X.from) = struct
  open X
  type 'a repr = 'a term
  type 'a obs = 'a F.obs
  let int x = fwd (F.int x)
  let add x y = fwd (F.add (bwd x) (bwd y))
  let observe x = F.observe (bwd x)
end
Again, we see the good old bounded polymorphism; nothing that we can't handle. But there is also a problem that contradicts our design requirements. Actually, two of them. In this framework, we have to specify the transformation pipeline explicitly and statically. That is, we need to know in which order the transformations should be applied; moreover, we need to know at compile time which analyses we want to use.
This breaks our modularity, as we want to be able to specify an analysis in a plugin and then load it dynamically. Most importantly, analyses in program analysis (sorry for the tautology) do not really form a pipeline. They do not even fit into the DAG abstraction. In general, and in binary analysis specifically, they form a graph with cycles. And this graph is built dynamically, so any analysis may depend on any other analysis, and our framework shall detect this graph automatically and run the analyses in the correct (and optimal) order to obtain the fixed point solution as fast as possible.
So we naturally (maybe not that naturally, but I've already been writing this post the whole day, so I have to fast-forward) come to our solution. Our universal values are constrained to be domains (optionally upgradeable to lattices). Moreover, we run all our analyses (or all analyses applicable to the context) in parallel, so our universal value is, under the hood, a key-value dictionary, or a universal record, where each entry denotes a particular representation (or semantics, or denotation) of a term.
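A toy model of such a universal record (this is only a sketch: the key and entry types are invented here, and BAP's Knowledge library uses domains and an efficient dictionary, not an association list):

```ocaml
(* each analysis registers its own key; the data type is tied to the key *)
type 'a key = ..
type entry = E : { key : 'a key; data : 'a } -> entry

(* a universal record: one slot per representation of a term *)
type record = entry list

type _ key += Size : int key
type _ key += Name : string key

(* looking up a slot recovers its typed denotation *)
let rec size : record -> int option = function
  | [] -> None
  | E { key = Size; data } :: _ -> Some data
  | _ :: xs -> size xs

let () =
  let r = [E { key = Name; data = "sqrt" }; E { key = Size; data = 42 }] in
  assert (size r = Some 42)
```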
The theories themselves are also domains, so we can store theories as values and manipulate them, join them, and even compare their partial orders. In addition, not only are theories first-class values, but the contexts which define theories are also reified, so we can manipulate them as well and run analyses in different contexts (here, by context, I mean the context of the theory application, in the Haskell sense of the word, i.e., the context which defines which type class instance, or, in our case, which structure, should be applied to the given terms).
I'm sorry that this is a little bit messy; I just didn't have enough time to write it in a more concise and straightforward way.
Feel free to ask questions so that we can refine this explanation.
[1] Actually, this is not true: there is no evidence that 'a I.repr can only be created with constructors from I. It is only possible to provide such evidence using a (G)ADT, e.g., with the signature
module type S = sig
  type 'a repr =
    | Int : int -> int repr
    | Add : int repr -> int repr -> int repr
end
we can at least persuade the type checker that values of type _ I.repr can only be formed with the I.Int and I.Add constructors, but it won't help us much and will hamper extensibility (and this is our main design constraint: to be able to add new language forms).