This is the first in (hopefully) a series of blog posts on defining an algebra of structural induction principles in type theory, borrowing inspiration from category theory. This is joint work with Patty Johann and Neil Ghani, and builds on work and ideas of many, notably Conor McBride. In this post, I'll just explain the background.
Structural recursion is a fundamental part of the definition of functions in Type Theory, and also in functional programming languages. A standard example is that of
length on lists (in Haskell syntax):
```haskell
length :: [a] -> Int
length []     = 0
length (x:xs) = 1 + length xs
```
length proceeds by looking at the constructor — either [] or
_:_ — and then choosing a course of action. In the nil case, it evaluates to
0, in the cons case it recursively evaluates
length, on the sub-term exposed by the pattern matching, and then adds
1 to it.
Definition by structural recursion has the following two features:
It is always terminating, because we only ever call the function again on smaller elements of the inductively defined type.
It provides an obvious way of computing. The process is: look at the constructor, do an action. We can define what it means to run functions, and hence use Type Theory as a programming language.
The first of these, while a nice-to-have feature in a functional programming setting, is essential in Type Theory, since we need totality of defined functions to retain decidability of type checking.
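To make the "look at the constructor, do an action" reading concrete, here is the length example hand-unfolded (renamed to len here only so it does not clash with the Prelude's length):

```haskell
-- Hand-unfolding of len [7,8,9]: each recursive call consumes one (:)
-- constructor, so the recursion is structural and must terminate.
--   len (7:8:9:[])  =  1 + len (8:9:[])
--                   =  1 + (1 + len (9:[]))
--                   =  1 + (1 + (1 + len []))
--                   =  1 + (1 + (1 + 0))  =  3
len :: [a] -> Int
len []     = 0
len (_:xs) = 1 + len xs

main :: IO ()
main = print (len [7,8,9 :: Int])
```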
Definition by structural recursion is, at least on the surface, extremely restrictive: recursive calls may only be made on terms that are immediate sub-terms of the current argument. The usual way to demonstrate this restrictiveness is the functional not-in-place "quicksort":
```haskell
quicksort :: Ord a => [a] -> [a]
quicksort []     = []
quicksort (x:xs) = quicksort l ++ [x] ++ quicksort h
  where (l,h) = partition x xs
```
For any reasonable definition of
partition, the lists
l and h will be strictly shorter than
x:xs to ensure termination, but naïve structural recursion cannot see this.
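For concreteness, here is one reasonable definition of such a pivot-splitting partition (my own sketch; the post leaves partition unspecified, and note that Data.List's partition takes a predicate instead of a pivot):

```haskell
-- Split xs around the pivot x. Both components are sublists of xs, so
-- each is strictly shorter than x:xs -- the fact a structural
-- termination check cannot see through the call to partition.
partition :: Ord a => a -> [a] -> ([a], [a])
partition x xs = (filter (< x) xs, filter (>= x) xs)

main :: IO ()
main = print (partition 3 [5,1,4,2 :: Int])
```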
To get around this kind of limitation, people have tried several different approaches.
Give up on termination and just use general recursion. This is the approach taken by most functional programming languages. General recursion can also be encoded in Type Theory if you have co-inductive types, as shown by Venanzio Capretta.
Instead of requiring that everything be defined using structural recursion, bolt another termination checker on to the type checker. This is the approach taken by Agda 2, and to a lesser extent by Coq.
This has the advantage that natural-looking function definitions can be written, much as they would be written in a functional language with general recursion. The downside is that the termination checker is something separate from the core type theory, abrogating the claim of type theory to be a "checkable language of evidence" (I got this description from Conor McBride).
Exploit structural recursion by making use of a general purpose inductively defined type
Acc that allows us to encode recursion on arbitrary well-founded relations (in Agda syntax):
```agda
data Acc (x : A) : Set where
  acc : (∀ y → y < x → Acc y) → Acc x
```
Functions are defined by structural recursion on a special
Acc x argument. To actually call a function that requires such an argument we need to provide a proof that
Acc x holds for our chosen relation
_<_ and our argument
x. See Edward Z. Yang’s blog post for more information on doing this in Agda. As far as I am aware, this technique goes back to Bengt Nordström’s paper Terminating General Recursion.
In this technique, we are defining the function by structural recursion on the
Acc x argument, so it is this argument that drives the computation. Since we probably want to erase this argument at run-time, this creates a gap between the semantics of functions at compile-time and functions at run-time.
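The shape of the technique can be sketched in Haskell, with the important caveat that Haskell cannot enforce well-foundedness: laziness makes this Acc trivially inhabited, so nothing is actually proved here, and all the names are my own. What the sketch does show is that the recursive calls are structural on the Acc argument, so it is the Acc value that drives the computation:

```haskell
-- A vacuous Haskell rendering of Acc: "every smaller y is accessible
-- too". In type theory the inner function would also demand a proof
-- that y < x; here that proof obligation is simply absent.
newtype Acc a = Acc (a -> Acc a)

anyAcc :: Acc a
anyAcc = Acc (const anyAcc)   -- laziness conjures "accessibility" for free

-- quicksort driven by the Acc argument: each recursive call peels one
-- layer off the Acc value, so the recursion is structural in Acc.
qsortAcc :: Ord a => Acc [a] -> [a] -> [a]
qsortAcc _         []     = []
qsortAcc (Acc rec) (x:xs) = qsortAcc (rec l) l ++ [x] ++ qsortAcc (rec h) h
  where (l,h) = (filter (< x) xs, filter (>= x) xs)

main :: IO ()
main = print (qsortAcc anyAcc [3,1,2 :: Int])
```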
A variant on this method is to generate a new inductively-defined predicate for each recursive function that you want to define. This is the Bove-Capretta method.
In this series of blog posts, I’m going to explore a different technique that allows more complex structural recursion principles to be built up from basic structural recursion on inductively defined datatypes, in such a way that the data we are analysing still drives the computation. This idea goes back to (I think) Eduardo Giménez (apologies for the paywall link), who used it to justify Coq’s termination checker. The ideas were also promoted by McBride and McKinna’s The view from the left, where the systematic use of eliminators for defining functions in type theory was investigated. I’ll also be taking ideas from category-theoretic presentations of structural recursion, specifically Hermida and Jacobs’ original paper and Neil Ghani, Patricia Johann and Clement Fumex’s generalisation of it.
In the next post I’ll discuss how structural recursion is encoded in type theory and category theory.