## On Structural Recursion II: Folds and Induction

Last time I talked about the background on defining structurally recursive functions in Type Theory and why you might want it. The key point is that structural recursion is driven by the data that is being analysed, as opposed to just doing its own thing with a side-proof that it always terminates.

The goal here is to come up with a self-contained and syntax-free definition of a dependently-typed structural recursion principle for a type `A`. To start with, this will be purely for normal structural recursion over inductively defined types. In future posts I will extend this to other types by combining existing structural induction principles.

In this post I'll cover how to present structural recursion and induction in a generic way. The first is probably well known to most people reading this, being just the standard stuff about initial `F`-algebras, but hopefully the second will be new.

The generic approach to structural induction I'll describe below underlies how Epigram 2 and my own language Foveran implement structural induction in a generic way. More information on Epigram's representation of data types is available in the paper The Gentle Art of Levitation.

My last post led to a very interesting discussion on reddit, which quickly left the measly content of my original behind. Just to clarify some of the points that came up in that discussion, I am only talking here about structural recursion/induction over inductively defined types---by which I mean types that arise as carriers of initial `F`-algebras---not co-inductive types (though the general technique for handling structural induction I'll describe below does extend to co-induction, at least in a categorical setting).

### What is Structural Recursion?

Structural recursion is often presented as a restriction. In Coq or Agda, the programmer enters a recursive definition, and the system inspects it to make sure it fits the template of acceptable definitions. This "generate and test" approach naturally leads most people to think that the system is just being stupid when it refuses to accept their definition.

An alternative is for the system to be up-front about the definitions it is prepared to accept. Instead of writing recursive definitions and having them rejected, the programmer uses special-purpose recursion combinators to write their programs. This is the basic idea behind recursion schemes, as in the classic paper Functional Programming with Bananas, Lenses and Barbed Wire. An important extra benefit of using recursion schemes, as pointed out in that paper, is that they often also come with reasoning principles.

I am interested in going beyond normal recursion schemes, and doing *dependent* recursion schemes as well. This allows us to use structural recursion as a method of reasoning as well as a method of definition, and to intertwine the two if we feel like it. First though, I'll go over the standard stuff about how to treat the most basic recursion scheme at a general level.

#### Folds, Catamorphisms and F-Algebras

We can represent structural recursion on lists (with elements from a set `A`) using the right fold function:

```
foldr : (B : Set) → B → (A → B → B) → List → B
```

which satisfies the following two equations:

```
foldr B n c nil = n
foldr B n c (cons a as) = c a (foldr B n c as)
```

As I mentioned last time, we can read these off directly as computation rules: when the implementation of `foldr` sees `nil` in its last argument, it knows to use its "nil" parameter; and when it sees `cons a as` in its last argument it knows to use the "cons" parameter. Thus we have a method for computation, driven by the structure of the data we are processing.
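
For instance, here is a sketch (in the same informal notation) of how `foldr` unfolds on a two-element list, each step selected by the head constructor of the list argument:

```
foldr B n c (cons a₁ (cons a₂ nil))
  = c a₁ (foldr B n c (cons a₂ nil))   -- "cons" equation
  = c a₁ (c a₂ (foldr B n c nil))      -- "cons" equation again
  = c a₁ (c a₂ n)                      -- "nil" equation
```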

The well-known category-theoretic generalisation of this makes use of `F`-algebras. The basic idea is that the constructors of our inductive type are encoded as a functor `F : Set → Set`. For instance, the constructors of lists with elements from some set `A` are encoded using the following functor (I've only defined it here on objects, but the action on morphisms is obvious):

```
ListF : Set → Set
ListF X = 1 + A × X
```
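
Spelling out the "obvious" action on morphisms, as a sketch in the same notation:

```
-- action of ListF on a morphism f : X → Y
ListF f (inl *)     = inl *          -- the nil shape has no recursive positions
ListF f (inr (a,x)) = inr (a, f x)   -- f is applied at the single recursive position
```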

The "nil" and "cons" arguments of the `foldr` function can now be captured together as a single function `f : ListF B → B`. A pair of a set `B` and such a function `f : F B → B` is called an `F`-algebra. Given two `F`-algebras `(k₁ : F B → B)` and `(k₂ : F C → C)`, we define `F`-algebra morphisms to be functions `h : B → C` such that `h ○ k₁ = k₂ ○ F h`. It is not too difficult to see that `F`-algebras and their morphisms form a category, called `F`-Alg.

For quite a large class of functors `F`, `F`-Alg has an initial object, unique up to isomorphism. Spelling this out, we have an `F`-algebra `(in : F µF → µF)` such that for any other `F`-algebra `(k : F B → B)` there is a *unique* `F`-algebra morphism from `(in : F µF → µF)` to `(k : F B → B)`.

For the lists example, an initial algebra for the functor `ListF` is the set of lists with elements from the set `A`, with `(in : ListF (List A) → List A)` defined simply as the construction of lists using `nil` and `cons`. By initiality, if we have a `ListF`-algebra `(k : ListF B → B)` we get a function of type `List A → B`. Since it is a `ListF`-algebra morphism it satisfies the two equations for `foldr` above, so we can interpret `foldr`.
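
As a sanity check, here is a sketch of how the two parameters of `foldr` package up into a single `ListF`-algebra, with `foldr B n c` as the unique mediating morphism:

```
k : ListF B → B
k (inl *)     = n        -- the "nil" parameter
k (inr (a,b)) = c a b    -- the "cons" parameter, b being the already-folded tail

-- foldr B n c : List A → B is the unique ListF-algebra morphism
-- from (in : ListF (List A) → List A) to (k : ListF B → B);
-- the morphism equation  foldr B n c ○ in = k ○ ListF (foldr B n c)
-- unfolds to exactly the two foldr equations.
```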

More generally, we can interpret specifications of inductive types as functors `F`, the inductive types themselves as the carriers of initial algebras, and structural recursion on them as making use of the initiality property to generate homomorphisms. Note that not every `F` you can write down necessarily has an initial algebra; for instance, `F X = (X → 2) → 2` doesn't have an initial algebra in the category of sets and functions.

In the recursion schemes literature, folds are referred to as *catamorphisms*. Edward Kmett has written a comprehensive Knol on catamorphisms. He also has a field guide to recursion schemes, for both data and codata.

### What is Structural Induction?

At a high level, (constructive) structural induction is just structural recursion with fancier types. And from a computational viewpoint this is fairly accurate: computation still proceeds by looking at data and then deciding what to do next. But we still need to work out what those types should be.

#### Dependent Structural Recursion (a.k.a. Structural Induction)

In pure Type Theory (i.e. without extensions for pattern matching and recursive definitions), structural recursion is presented in terms of elimination principles. Continuing the example of lists, we are provided with a pair of constructors `nil : List` and `cons : A → List → List` and an eliminator:

```
elimList : (P : List → Set) →
           (P nil) →
           ((a : A) → (l : List) → P l → P (cons a l)) →
           (l : List) → P l
```

This looks very similar to the type of the `foldr` function above. The difference is that the return type depends on the input list, and this dependency is carried through in the types of the "nil" and "cons" parameters (the second and third parameters).
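
One way to make the relationship precise: instantiating `P` with a constant family collapses the eliminator back to plain `foldr`. A sketch:

```
-- taking P to be the constant family λ l. B discards the dependency,
-- and elimList specialises to foldr:
foldr B n c = elimList (λ l. B) n (λ a l b. c a b)
```

The induction hypothesis `P l` degenerates to `B`, so the dependent eliminator really is structural recursion with fancier types.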

As with the `foldr` function, `elimList` satisfies a pair of equations, which are almost identical to those for `foldr`, except for some extra information being passed in the `cons` case.

```
elimList P n c nil = n
elimList P n c (cons a as) = c a as (elimList P n c as)
```

In presentations of type theory, the construction of elimination principles is usually defined syntactically by looking at the types of the constructors. See, for example, Dybjer's presentation of inductive families. This seems a little ad-hoc: one might hope for a generic way of extracting dependent eliminators for inductive types just from their presentations as initial `F`-algebras. Happily, this is possible.

#### Structural Induction for initial `F`-algebras

Given a functor `F : Set → Set` that has an initial algebra `(in : F µF → µF)`, we first need to derive the type of the generic elimination principle for `µF`. The generic fold operator for `µF` has this type:

```
fold : (B : Set) → (F B → B) → µF → B
```

Looking at the similarities between the `foldr` function on lists and the `elimList` eliminator, we can guess that a generalisation for a general initial algebra `(in : F µF → µF)` should look like this:

```
elimF : (P : µF → Set) →
        ((x : F µF) → lift F P x → P (in x)) →
        (x : µF) → P x
```

I'll attempt to explain this type with reference to the type of `elimList`. The first parameter, `P : µF → Set`, is the dependent type we are eliminating to, and is exactly the same as before. The second parameter is the "algebra" component. It is a function that takes a value of type `F µF`; in the case of lists, this would be either `inl *` (representing nil) or `inr (a,l)` (representing cons). So, similarly to `F`-algebras encoding the multiple parameters to `fold`, we are encoding all the cases into one. The second argument of the algebra component is a little trickier. I have assumed a functor `lift F : (µF → Set) → (F µF → Set)` that takes `P` and `x` and produces some set. This part encodes the "inductive hypothesis" for this structural recursion principle; I'll explain it further below. Finally, the "algebra" part ought to return a value of type `P (in x)`, corresponding to the `P nil` and `P (cons a l)` parts of the `elimList` type.

Given such an "algebra" parameter, `elimF` promises to take any element `x` of `µF` and return a value of `P x`.

##### Lifting `F`

I now need to give a definition for `lift F : (µF → Set) → (F µF → Set)`. In the case of lists, we can see that the following definition gives the right answer:

```
lift F P (inl *) = ⊤
lift F P (inr (a,l)) = P l
```

Unfolding this definition in the type of `elimF` above, it is easy to see that it is an encoding of the original type of `elimList`.
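
A sketch of the unfolding: with `F = ListF`, the algebra argument `(x : F µF) → lift F P x → P (in x)` splits by cases on `x` into the two familiar parameters of `elimList`:

```
x = inl *     :  lift ListF P (inl *)     = ⊤      gives  ⊤ → P nil
x = inr (a,l) :  lift ListF P (inr (a,l)) = P l    gives  P l → P (cons a l)
```

The first is (up to the trivial `⊤` argument) the "nil" parameter `P nil`, and the second is the "cons" parameter with its induction hypothesis.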

But how do we do this in general for any `F`? Hermida and Jacobs did this back in the nineties for polynomial `F` (i.e. `F`s constructed from constants, sums, products and the identity functor) by induction on the structure of `F`:

```
lift Id P x = P x
lift (K A) P a = ⊤
lift (F × G) P (x,y) = (lift F P x) × (lift G P y)
lift (F + G) P (inl x) = lift F P x
lift (F + G) P (inr y) = lift G P y
```

Note there are two cases for `F + G`, depending on which component the `(F + G) µH` argument is in. It is easy to see that `lift F` is also a functor, so it has an action on morphisms between `µF → Set` objects.
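
As a worked example, viewing `ListF X = 1 + A × X` as the polynomial `K 1 + (K A × Id)` and unfolding the clauses above recovers the list lifting (a sketch):

```
lift ListF P (inl *)     = lift (K 1) P *                = ⊤
lift ListF P (inr (a,l)) = lift (K A × Id) P (a,l)
                         = lift (K A) P a × lift Id P l
                         = ⊤ × P l                       -- isomorphic to P l
```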

*Aside*: note that this definition doesn't depend on the type of `P` being `µF → Set`. We can actually give `lift F` the type `(A → Set) → (F A → Set)` for any `A`, so we can extend `lift F` to a functor on the category `Fam(Set)`, where objects are pairs `(A : Set, P : A → Set)`. This extra generality helps when proving that structural induction is derivable from structural recursion, and for further applications like doing refinement of inductive types. Logical relations lovers will also see that this is the definition of the unary logical relation derived from the type scheme generated by a polynomial functor.

An important property of `lift F` is that it is *truth-preserving*: for any `x : F µF`, there is a (unique) value of `lift F (λy. ⊤) x`. I read this as saying that `x : F µF` completely determines the shape of `lift F (λy. ⊤) x`. We can use this to derive a useful function, using the functoriality of `lift F`:

```
all : (P : µF → Set) →
      (x : F µF) →
      (k : (y : µF) → P y) →
      lift F P x
```
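
For polynomial functors, `all` can be defined by the same induction on the structure of the functor as `lift` itself. A sketch, making the functor argument explicit:

```
all Id      P x k        = k x                          -- one recursive position: apply k
all (K A)   P a k        = *                            -- no recursive positions: the element of ⊤
all (F × G) P (x,y) k    = (all F P x k, all G P y k)
all (F + G) P (inl x) k  = all F P x k
all (F + G) P (inr y) k  = all G P y k
```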

Knowing that `lift F` is truth-preserving turns out to be just enough to prove that the `elimF` rule above holds, and also that the following equation holds:

```
elimF P h (in x) = h x (all P x (elimF P h))
```

Spelling this out for the case of lists, it is pretty easy to see that this gives us back the original elimination rule for lists. It is also true that `elimF P h : (x : µF) → P x` is the *unique* function that satisfies this equation, at least in the categorical setting. See Neil, Patty and Clem's paper for the category-theoretic proof that this works. The proof only demands that we have an initial algebra for `F` and a truth-preserving lifting. As a hint of the proof, note that if we regard `lift F` as an endofunctor on `Fam(Set)`, then the premises of the `elimF` function amount to taking a `(lift F)`-algebra with carrier `(µF, P)` to a morphism `(x : µF) → K₁ µF x → P x`, where `K₁ µF x = ⊤`. The proof consists of showing that `K₁ µF` is the carrier of the initial `(lift F)`-algebra.

##### Generalising to all `F`

I have only defined `lift F` for polynomial `F` so far, but it is possible to characterise `lift F` for an arbitrary `F`, as hinted at by Hermida and Jacobs and fleshed out by Neil, Patty and Clem. Assuming we had extensional equality, we could define `lift F` as:

```
lift F P x = Σy : F (Σz : µF. P z). F π₁ y = x
```

where `Σx : A. B x` denotes a dependent pair type, and `π₁` is the first projection. It is fairly easy to see that this is always truth-preserving, and that it coincides with the definition above for polynomial functors. This is particularly nice because it gives an extensional definition of what the lifting is (i.e. one that doesn't care about how `F` is constructed). It is also possible to give a generic description that works in any fibration with the right structure, so we can generalise to indexed types, types over categories of domains and so on.
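
As a hint of why this definition is always truth-preserving, here is a sketch of the witness for `P = λy. ⊤`, built from `x` itself using the functor laws:

```
witness : lift F (λy. ⊤) x
witness = (F (λz. (z, *)) x , refl)

-- this type-checks because, by the functor laws,
--   F π₁ (F (λz. (z,*)) x) = F (π₁ ○ (λz. (z,*))) x = F id x = x
```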

When actually implementing this in an intensional type theory, like in Epigram 2 or Foveran, it is better, both because of the lack of extensional equality and for pragmatic reasons, to define the lifting directly in terms of the descriptions of the functors. However, the extensional definition does give a guide as to the form that `lift F` ought to take when we move beyond polynomial functors.

### Can we generalise Structural Induction?

My goal in the next post is to show that structural induction on the structure of an inductively defined type is only one form of structural induction. Generalising from the set up above, my current thought for what a "structural induction principle" for some `A : Set` should look like consists of:

- Another set `B`, which records the "sub-structure" of `A`
- A constructor `k : B → A`
- A predicate transformer `H : (A → Set) → (B → Set)`
- A witness to truth-preservation `tr : (b : B) → H (λa. ⊤) b`
- An eliminator `elim : (P : A → Set) → ((b : B) → H P b → P (k b)) → (a : A) → P a`

satisfying the equality

```
elim P h (k b) = h b (all P b (elim P h))
```

where `all` is derived from the witness to truth-preservation.
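
To make the schema concrete, here is a sketch of how the usual induction principle for natural numbers arises by instantiating it with `A = ℕ` (i.e. the initial algebra of `F X = 1 + X`):

```
B = 1 + ℕ                    -- zero has no sub-structure; suc n has sub-structure n

k (inl *) = zero             -- the constructor
k (inr n) = suc n

H P (inl *) = ⊤              -- no induction hypothesis for zero
H P (inr n) = P n            -- the induction hypothesis for suc n

tr (inl *) = *               -- truth-preservation is trivial in both cases
tr (inr n) = *
```

With `elim` the standard induction principle for ℕ, the required equality states exactly its two computation rules.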

I should say that I'm not totally fixed on the details. My current thought is that I might need multiple constructors, or that more structure on `H` is required. But I'm pretty sure that an explicit description of how elements of the data type we are analysing are constructed is essential to any description of *structural* induction, because it allows us to state the computation rule. My plan is to push through examples and see what is required.

Previous work on this includes Nils Anders Danielsson's definition of Recursor in the Agda standard library, but this doesn't include a notion of constructor, or a computation rule. Conor also has a definition called `Below` which is pretty much identical.

Obviously structural induction on an inductively defined type fits into this schema, with `B = F µF` and `H = lift F`. It is not so obvious that more general ones do. In the next few posts I'll cover how to generate new structural induction principles from old ones. For those of you that want to read ahead, you can look at an outdated implementation, using an old definition of structural induction principle, in the Foveran repository.