Let be a pointed connected type, with base point , and let be the family of paths starting at , i.e. . The polynomial endofunctor is then given by

.

Now we observe that is a pointed type, with base point , where is the constant function. In other words, the unit type is a coalgebra for ! This implies that there is a unique homomorphism of coalgebras from to ,

and in particular that is again a pointed type. Let be the base point of . Then it also follows that , where is the constant function pointing to .

More generally, whenever is a pointed type, so is . From the same argument, it follows that if is merely inhabited, then so is (and likewise for indexed coinductive types). Moreover, we get a fiber sequence

.

Now let’s think about the identity type of . By the bisimulation theorem, it follows that for any , the identity type is equivalent to the coinductive type , which is the final coalgebra for the polynomial endofunctor associated to the indexed container

- ,
- ,
- ,
- .

From this description we see immediately that each is going to be a merely inhabited type, because each is merely inhabited. In other words, by the assumption that is (pointed and) connected, it follows that is (pointed and) connected.

Also, since and is the constant map pointing to , we see that is the final coalgebra for the polynomial endofunctor

.

In other words, is the classifying type of , where is the constant family pointing to .

This gives some fun results in the case of the infinite real and complex projective spaces, where the loop space of is the final coalgebra for , and respectively. If people would be up for it, I’d love to try to get a better feeling for what those spaces are.


This fact about the type of 2-element types was the starting point of the definition of the real projective spaces, which I did with Ulrik Buchholtz, and we hope to publish this work soon on the arXiv. I place this post in the ‘folklore’ category, because it is basically a rephrasing of known facts, but this is joint work with Ulrik.

Recall that we have a type family , where is the canonical type with terms. The **type of n-element types** is then the subtype of the universe. Note that by the univalence axiom it follows immediately that the loop space of the type of n-element types is equivalent to the type , which is a set that comes with the group structure of the symmetric group .

So, we already know the loop space of the type of 2-element types: it is (and of course it is the same as ). Nevertheless, by characterizing the type for any 2-element type , we get a bit more information, and we shall see as an application at the end that we can establish the type of 2-element types as a certain classifier.

To characterize the identity type , we have to do three things: (1) we need to give a type family ; (2) we need to give a point ; and (3) we need to show that the total space of is contractible. Since we have suggested already that to give an equivalence it is enough to tell where is mapped to, we will take given by . This has the point . Note that the total space of is just the type of pointed 2-element types! Its contractibility is the following theorem.

**Theorem 1.** *The type of pointed 2-element types is contractible.*

*Proof.* We take as the center of contraction. We need to define an identification of type , for any and . Let and . It is a standard fact that we have an equivalence of type

.

Hence we can complete the proof by constructing a term of type .

It is time for a little trick. Instead of constructing a term of the type in the above type, we will show that this type is contractible. Since being contractible is a mere proposition, this allows us to eliminate the assumption into the assumption . Note that the endpoint of is free. Therefore we eliminate into . Thus, we see that it suffices to show that the type

is contractible for any . This can be done by case analysis on . Since we have the equivalence which swaps and , it follows that is contractible if and only if is contractible. Therefore, we only need to show that the type

is contractible. For the center of contraction we take . It remains to construct a term of type

.

Let and . Then we have an equivalence of type

Hence it suffices to construct a term of the type on the right hand side. We define a homotopy by case analysis: we take . To define , note that the type is contractible. Therefore, we have a center of contraction . Recall that equality on is decidable, so we have a term of type . Since , it follows that . Thus, we get , which we use to define . QED
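For concreteness, the swap equivalence used in the case analysis above can be sketched in Lean 4, with `Bool` standing in for the canonical 2-element type (the names here are my own, and Lean’s ordinary equality stands in for the identity type):

```lean
-- Bool stands in for the canonical 2-element type; `swap` interchanges
-- the two elements, and since it is its own inverse it is an equivalence.
def swap : Bool → Bool
  | true  => false
  | false => true

theorem swap_involutive : ∀ b, swap (swap b) = b
  | true  => rfl
  | false => rfl
```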

By the above theorem, we have characterized the identity type of . Let’s summarize the result:

**Corollary 2.** *The canonical fiberwise map of type that is determined by mapping to , is a fiberwise equivalence.*

As an application, we show that the map pointing to classifies the double covers. By this, we mean the following:

**Theorem 3.** *For any double cover , the square*

*commutes, and is a pullback.*

*Proof.* Since for any , we have by Corollary 2 a fiberwise equivalence

.

indexed by . Hence it follows that the induced map of total spaces is an equivalence. Then, it also follows that the diagram

commutes. Since the inner square is a pullback square, it follows that the outer square is a pullback square.


It begins by observing that a fiberwise map is a fiberwise equivalence (i.e. each is an equivalence) precisely when it gives an equivalence on total spaces. The map is defined by

.
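A minimal Lean 4 sketch of this induced map on total spaces (the name `totalMap` is mine, and Lean’s `Sigma` stands in for the total space of a type family):

```lean
-- The map on total spaces induced by a fiberwise map f; each fiber B a is
-- sent into the fiber C a over the same base point.
def totalMap {A : Type} {B C : A → Type}
    (f : (a : A) → B a → C a) : Sigma B → Sigma C :=
  fun ⟨a, b⟩ => ⟨a, f a b⟩
```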

**Theorem 1.** *Let be a fiberwise map from to . Then the following are equivalent:*

- *Each is an equivalence,*
- *The map is an equivalence.*

The reason is basically that the fiber at is equivalent to the fiber . Thus, all the fibers of are contractible precisely when all the fibers of each are contractible.

Now let’s focus on characterizing the identity types. Recall that for any type with , the type is contractible. At the same time, the type of identifications starting at is the least ‘reflexive type family’ on . By this, we mean that for any with , there is a (unique) fiberwise map which maps to . By Theorem 1 above, it follows that this is fiberwise an equivalence precisely when is contractible. Thus, we have:
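The canonical fiberwise map out of the identity family can be sketched in Lean 4 (the name `canonicalMap` is mine; path induction is performed here via Lean’s transport operator `▸`):

```lean
-- The fiberwise map out of the identity family determined by a point
-- b : B a; it sends a path p : a = x to the transport of b along p.
def canonicalMap {A : Type} {a : A} {B : A → Type} (b : B a) :
    (x : A) → a = x → B x :=
  fun _ p => p ▸ b
```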

**Theorem 2.** *Let be a type with , and let be a type family with . Then the following are equivalent:*

- *The (unique) fiberwise map that is determined by , is a fiberwise equivalence.*
- *The total space is contractible.*

This is how we often prove that a particular, well-chosen type family on a given pointed type is the identity type. To obtain what is called the encode-decode method, one may further break down the contractibility condition. Nevertheless, Theorem 2 is useful by itself, because in lots of situations there are other means of showing that the total space is contractible. Therefore, this theorem has been for me the most useful tool in characterizing the identity type of a type.

Also, a slight generalization of Theorem 1 (which I use pretty often) arises when we take a map , and a fiberwise map , i.e. a fiberwise map over . Now we can define by

.

**Theorem 3.** *Let , and let . Then the following are equivalent:*

- *Each is an equivalence,*
- *The square*

*is a pullback square.*


Let us first explicitly describe the transposed family . First, the family over is given on vertices by . Then, for an edge in and and , we have the type of edges.

Note that the type is contractible. In other words, the relation is functional. Similarly, since transport over is an equivalence, the relation is also functional in the opposite direction, i.e. as a function from to .

A relation which is functional is the same thing as a function from to , and a relation which is functional in both directions is the same thing as an equivalence from to .

Thus, given a family of graphs over , we can ask whether each of the relations is either (i) functional, or (ii) functional in the opposite direction, or (iii) functional in both directions. These conditions on a family of graphs over are conditions of cartesianness, for they are (respectively) equivalent to (i) the left square being a pullback, (ii) the right square being a pullback, or (iii) both squares being pullback, in the following diagram

We see that we get four notions of type dependency on a graph. We have the original notion of a dependent graph, and by putting conditions of cartesianness in play we get three more:

**(i) Diagrams.** A (covariant) diagram over a graph consists of a type family , and an indexed family of functions . A term of a diagram over consists of a section , in which is compatible with the edges by . Note that the type of all terms of a diagram can be given the structure of a cone on , and indeed this is the limiting cone for , as we saw in my earlier post about coinductive types.

**(ii) Contravariant Diagrams.** A contravariant diagram over a graph consists of a type family , and an indexed family of functions . A term of a contravariant diagram over consists of a section , in which is compatible with the edges by .

**(iii) Equifibered Families.** An equifibered family over a graph consists of a type family , and an indexed family of equivalences . A term of an equifibered family over consists of a section , in which is compatible with the edges by .

Using these varying notions of families, we can nicely detect the variances of the different type operations. For instance, if is a contravariant diagram over , and is a (covariant) diagram over , then is again a (covariant) diagram over . If was contravariant and covariant, then would be contravariant. Similarly for the W-graphs. If is contravariant and is covariant, then is contravariant, while if is covariant and contravariant, then is covariant. When you try to take where both and are covariant, you still get a family of graphs, but it will be a general one.

On the other hand, the model in which the families are the equifibered families is a full model of type theory with dependent function types and W-types. Moreover, an equifibered family over a graph prescribes precisely the descent data that is needed to get a type family over . Indeed, the descent theorem states that the type of [type families over ] is equivalent to the type of [equifibered families over ]. Even more: the descent theorem can be extended to a ‘slice-wise’ equivalence between models. In other words, locally, the univalent universe as a model of type theory is precisely the same thing as the graph model with equifibered families.

We see that, even if we attempt to formalize just several aspects of the graph model of type theory in order to study higher inductive types, we run into various different models with varying amounts of structure beyond type dependency. The situation really calls for us to also formalize an abstract notion of model in homotopy type theory of which all the models we have encountered so far, including the univalent universe, are instances. This should be useful even if we cannot find a completely satisfactory notion of model (because it might be lacking higher coherences), just because it will force us to be formal about what we mean by having implemented a model of type theory as a structure of unrestricted truncation level.

**Previous posts about formalizing the graph model:**
Formalizing the graph model of type theory, part 1

Formalizing the graph model of type theory, part 2


Recall that a graph in homotopy type theory consists of a type , and a binary type-valued relation . We can use this data to describe a higher inductive type , in which the point constructors come from , and the path constructors come from . More precisely, there will be a graph morphism , where the data of such a morphism corresponds precisely to the data of the point and the path constructor of . Here, given a type , the graph is defined as .
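The basic data of a graph, and the graph that a type determines, can be sketched in Lean 4 (the names are my own; since Lean’s equality is Prop-valued, `PLift` lifts it to a type of edges):

```lean
-- A graph: a type of vertices and a type-valued binary relation of edges.
structure Graph where
  V : Type
  E : V → V → Type

-- The graph associated to a type X, whose edges are identifications
-- (Lean's equality, lifted to Type via PLift).
def pathGraph (X : Type) : Graph :=
  { V := X, E := fun x y => PLift (x = y) }
```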

The ‘universal property’ of higher inductive types will be their induction principle. In an induction principle, one specifies what data is needed to obtain a section of a type family. So it is about families and sections, as is everything in type theory. I will state the induction principle in a way that is also going to fit nicely with my formalization of the graph model.

Let be a family over . Recall that, since is a morphism of models of type theory, it has an action on families. More precisely, we have a family of graphs over . We can now substitute in to obtain a family over , which we may call the transpose of . The induction principle asserts that we get a section of from a section of .

To state the computation rule (which always comes with the induction principle), we can play the same game. First we observe that we can transpose any section of , using the action on terms of , so that it becomes a section of . Suppose the induction principle gave us the section of , from a section of . The computation rule now says that we have an equality of sections of .

For our purposes it does not matter so much whether this equality is strict or typal. In fact, I tend to take it to be a typal equality, but in formalization projects it is helpful if the equality is strict at least on the level of vertices. Nevertheless, the typal computation rule suffices to show that the type is determined uniquely up to equivalence. In a univalent world this means that the type of all types with a morphism satisfying an induction principle and a typal computation rule, is a mere proposition. In still other words, the ‘left adjoint’ to specified in this way, is unique once it exists.

As a final note, we observe that our description of the graph quotient, i.e. the colimit of a graph, was given in a way that doesn’t depend so much on the internals of the graph model. We could as well have taken a different model, such as the reflexive graph model. The induction principle of the reflexive graph quotient can be stated in exactly the same way.

**The previous post about the formalization of the graph model:**
Formalizing the graph model of type theory, part 1


Now the talk has been turned into an article, and it is published by the Bulletin of the AMS. Each of the five stages apart from ‘depression’ is illustrated by mathematical theorems that a mathematician might use to justify their being in that stage. And the ‘depression’ stage… well, just read it for yourself! There are juicy quotes, it is mostly non-technical, and I found it delightful and funny. Also, it is freely available, so I can highly recommend it!


**Example -2 (The trivial relation).** The trivial relation relates everything in a unique way. More precisely, the trivial relation on a type is defined as . The quotient of modulo the trivial relation is just the propositional truncation.

**Example -1 (Prop-valued equivalence relations).** A Prop-valued equivalence relation on a type is simply a binary relation which is reflexive, symmetric and transitive in the usual sense. Since the relation takes values in the (small) mere propositions, we don’t need to impose any further conditions. The quotient operation for Prop-valued equivalence relations is the set-quotient (set-quotients are usually only called that when the underlying type is also a set, but this doesn’t matter here).

**Example 0 (Pre-1-groupoid structure).** A pre-1-groupoid structure on a type consists of a category for which the type of objects is equal to , and in which every morphism is invertible. A pre-1-groupoid can be made into a 1-groupoid by ‘Rezk-completing’ it. This process is described in detail in Chapter 9 of the HoTT book.

Given a notion of ∞-equivalence relation, the ‘pre-n-groupoid structures’ should correspond precisely to the ∞-equivalence relations taking values in the subuniverse of (n-1)-truncated types. Moreover, the quotienting operation for the ∞-equivalence relations should correspond to the quotienting operation for pre-n-groupoid structures, while being completely indifferent to truncation levels. Luckily, it is not too hard to imagine how that would go. Note that in each case (both the finite truncation levels and the unrestricted level), the quotienting operation is a version of Rezk-completion. More precisely, the quotient is the image of (the action on objects of) the Yoneda-embedding.


Presumably, any relation which possesses the extra structure of your notion of an equivalence relation is going to be at least reflexive in an obvious way. So, suppose you’ve got a structure which takes a reflexive relation on and returns the structure you’ve come up with.

The key to knowing whether is any good is to compare it with the surjective maps that go out from . After all, at some point you will define a suitable quotient type together with a *surjective* quotient map . Let us write for the type of all surjective maps out of into some small type, i.e.

.

Recall that is defined as .

To any map , not necessarily a surjective one, we can associate a reflexive relation on , by substituting in the identity type on . More precisely, we define a map , by taking to be the reflexive relation on consisting of the binary relation and the proof of reflexivity. We might call the ‘pre-kernel’ of , since morally it is the kernel but it lacks an explicit structure of an equivalence relation. For all we know, it might possess this structure in many different ways. There are two things that need to be done:

**Lift the pre-kernel.** *The first thing you need to do is to lift to an operation as indicated in the diagram*

*In other words, every pre-kernel needs to be given the structure of an equivalence relation so that it becomes a kernel.*

**Find an inverse to the kernel operation.** *Second, you need to show that is an equivalence.*

Just to spell out what the second requirement means, it involves finding for every equivalence relation on , the quotient and a surjective map . This gives the inverse of . Then, to show that , you need to show that there is a commuting triangle

in which the bottom map is an equivalence. Finally, the quotienting operation needs to be shown effective, i.e. it needs to be shown that . This involves first showing that, for any equivalence relation there is a fiberwise equivalence

preserving reflexivity. This determines a path . To complete the proof of effectiveness, it needs to be shown that . In other words, that the canonical structure of being an equivalence relation that the pre-kernel possesses, agrees with the assumed structure , that is an equivalence relation.
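The pre-kernel operation itself can be sketched in Lean 4 (names are mine; Lean’s Prop-valued equality stands in for the identity type of the univalent setting):

```lean
-- The pre-kernel of a map f : A → X: the reflexive relation relating x
-- and y whenever f x = f y, together with its evident reflexivity.
def preKernel {A X : Type} (f : A → X) : A → A → Prop :=
  fun x y => f x = f y

theorem preKernel_refl {A X : Type} (f : A → X) (x : A) :
    preKernel f x x :=
  rfl
```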

There are already at least two notions of ∞-equivalence relations that satisfy the two requirements that I described above:

- The first one is completely uninformative, but it is out there anyway. We can take to be just . Since for any map, the total space of the fibers is equivalent to the domain, this satisfies the two requirements for completely general reasons. We learned nothing from this example.
- The second example is that of ‘principal equivalence relations’. I hope to write about them soon. For now, I will just mention that I’ve given a presentation about them at the Workshop on Univalent Foundations and Categorical Logic, in Leeds. If you are curious, you can already have a look at my slides.

Neither of these examples is completely satisfactory, though. In both cases, the type is a large type, and we need the univalence axiom to extend to . I believe that ultimately, the type should be a small type whose formulation does not rely on the univalence axiom, and for which can be lifted to without univalence. Moreover, in the scenario where that is possible, it would be nice if the assertion that is an equivalence for each type were itself equivalent to the univalence axiom. I hope that something like that will be possible in the future.


Recall that a reflexive graph in homotopy type theory is a triple consisting of a type of vertices, a binary type-valued relation of edges, and a proof of reflexivity . A morphism of reflexive graphs is similarly a triple consisting of an action on vertices , an action on edges , with witnessing that reflexivity is preserved.
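This data, together with the discrete reflexive graph on a type, can be sketched in Lean 4 (the field names are my own; `PLift` lifts Lean’s Prop-valued equality to a type of edges):

```lean
-- A reflexive graph: vertices, type-valued edges, and a reflexivity term.
structure ReflGraph where
  V : Type
  E : V → V → Type
  r : (v : V) → E v v

-- The discrete reflexive graph on a type A: edges are identifications,
-- and reflexivity is witnessed by `rfl`.
def discrete (A : Type) : ReflGraph :=
  { V := A, E := fun x y => PLift (x = y), r := fun _ => ⟨rfl⟩ }
```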

For any type we have the discrete reflexive graph . The operation can be made into a morphism of internal models, and this blog post is about the ‘left adjoint’ to this morphism of internal models: the reflexive graph quotients. Since we intend to be the left adjoint to , we can introduce it accordingly. The type is going to be a higher inductive type, and the constructors of are going to assemble the unit of our adjunction. Thus, we have a morphism of reflexive graphs. The induction principle for will tell us how to construct a section of a dependent type . In other words, it will tell us how to construct a lift

Since we want an adjunction, it should suffice to find a lift in the transposed diagram

Thus, we formulate the induction principle for as follows: to construct a section of , it suffices to construct a lift of in the reflexive graph model, along the indicated projection. The computation rule then says that the resulting section will satisfy . It turns out that these requirements suffice to prove the desired universal property for . On to the theorem!

**Theorem 1.** *Let be a reflexive graph with reflexive graph quotient . Then the square*

*is a pushout square.*

This theorem has been formalized in Coq by Simon Boulier while I visited him and Nicolas Tabareau, as part of our ongoing project on a formulation of ∞-equivalence relations. The proof is by directly comparing cocones in the two situations. An immediate corollary is that the reflexive graph quotient of the indiscrete graph is the join , since the total space of the binary relation is just .

Recall that Van Doorn uses the indiscrete (non-reflexive) graph to construct the propositional truncation. More precisely, he shows that the sequential colimit of operations is the propositional truncation. Similarly, Boulier uses the indiscrete reflexive graphs. We can now compare these constructions to the ‘join construction’ of the propositional truncation, where the propositional truncation is constructed as the colimit of operations of iterated join powers of a type with itself. Let us consider the subsequence , which has the same colimit. Since the join is an associative operation, it follows that the . By our observation that , it follows that this sequence is just the sequence . In other words, Boulier’s approximating sequence of the propositional truncation appears as a subsequence of the approximating sequence that we get from the join-powers.

There is one more observation that I’d like to make. As we’ve mentioned, the reflexive graph quotient is left adjoint to the discrete functor . There is also a right adjoint to , which just returns the type of vertices of a reflexive graph. Furthermore, the indiscrete functor is a right adjoint. Thus, in the setting of graphs we have obtained the reflective subuniverse of mere propositions by iteratively composing the outer two of these adjoints: the right-most () followed by the left-most (). This leaves us with the question whether we get something similar in other cohesive (∞-)toposes. That’s a question I’m currently working on with Jonas Frey. I’d love to hear insights from people who are familiar with cohesive ∞-toposes.


The most basic example of a coinductive type is the type of streams in a given type . A stream in can be thought of as an infinite string of terms of , indexed by the natural numbers. There are two basic operations one can perform on streams, that characterize what streams are: one can take the head of a stream, which is the term at position 0, or one can take the tail of a stream, which is the stream one gets by removing the term at position 0. Thus, we have a map . In fact, this map is an equivalence.

Now note that we have an endofunctor given by , which can be thought of as a linear polynomial. A coalgebra for is an object together with a map , which is in our case a type together with a map . Thus, we see that the type of streams is a coalgebra for the endofunctor . Indeed, it is defined as the final coalgebra (i.e. a terminal object in the category of coalgebras for ).
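For concreteness, here is a small Lean 4 sketch of this situation (the names are my own, and the function type ℕ → A stands in for the limit presentation of streams, since Lean 4 has no native coinductive types):

```lean
-- Streams over A, presented as functions ℕ → A. `out` is the coalgebra
-- structure Str A → A × Str A (an equivalence, with inverse `cons`), and
-- `corec` is the coalgebra map into streams determined by any coalgebra
-- c : X → A × X.
def Str (A : Type) : Type := Nat → A

def hd {A : Type} (s : Str A) : A := s 0
def tl {A : Type} (s : Str A) : Str A := fun n => s (n + 1)

def out {A : Type} (s : Str A) : A × Str A := (hd s, tl s)

def cons {A : Type} (a : A) (s : Str A) : Str A :=
  fun n => match n with
  | 0 => a
  | n + 1 => s n

def corec {X A : Type} (c : X → A × X) : X → Nat → A
  | x, 0     => (c x).1
  | x, n + 1 => corec c (c x).2 n

-- Example: the stream of natural numbers, unfolded from n ↦ (n, n + 1).
def nats : Str Nat := corec (fun n => (n, n + 1)) 0
```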

To model general coinductive types, we have to talk about containers. A (small) container is just a type together with a type family . Each container determines a polynomial endofunctor by . The coinductive type associated to the container is then defined to be the final coalgebra for . One expresses the condition of being final in HoTT, simply by saying that for each coalgebra of , the type of coalgebra homomorphisms from to is contractible. Under this condition, one can show that the map is an equivalence.

The immediate question is then whether such final coalgebras always exist in the setting of homotopy type theory, and this is indeed the case. Ahrens, Capriotti and Spadotti have formalized a construction of final coalgebras, as the limit of a simple diagram obtained by iteratively applying to the unit type :

We denote the maps in this diagram by .
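A container, its polynomial endofunctor, and the finite stages of this iteration can be sketched in Lean 4 (the names are my own):

```lean
-- A container: a type of shapes A and, for each shape, a type of positions.
structure Container where
  A : Type
  B : A → Type

-- The polynomial endofunctor P X = Σ (a : A), (B a → X).
def Container.P (C : Container) (X : Type) : Type :=
  (a : C.A) × (C.B a → X)

-- The finite approximations 1, P 1, P (P 1), …; the final coalgebra is
-- the limit of this sequence along the evident projection maps.
def Container.approx (C : Container) : Nat → Type
  | 0     => Unit
  | n + 1 => C.P (Container.approx C n)

-- Streams over S arise from the container with shapes S and one position.
def streamContainer (S : Type) : Container :=
  { A := S, B := fun _ => Unit }
```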

For future reference, it will be useful to have an explicit description of the coalgebra structure of . That is, we want to give a precise definition of the map . Note that decomposes into a map and a dependent function . Let , where , and where .

- First, we define .
- The construction of is a bit more involved. We will construct the map by constructing a cone on the defining diagram of . This cone is obtained by constructing a natural transformation of type sequences

in which the vertical maps are given by , and the horizontal maps on the top row are required to be equivalences. We obtain the commuting squares by generalization. Note that for each , we have and such that . More generally, suppose we have and such that . We want to show that the diagram

commutes, where the top map is an equivalence. Since the endpoint of the path is free, we may eliminate it into . Also, note that . Thus, we want to make a commuting diagram of the form

which is straightforward, since we may take the upper map to be the identity map.

This finishes the construction of . Before we continue, let us review a bit of the theory of limits in HoTT.

Recall from the previous post that a graph consists of a type of vertices and a binary, type-valued relation of edges. A diagram over the graph then consists of a type family , and an indexed family of maps . Any type determines a diagram on . A cone on with vertex is then simply a morphism of diagrams . By functoriality of , we get for each a map , and pre-composing by gives an operation from cones with vertex to cones with vertex . A cone with vertex is said to be limiting if for each , this precomposition map is an equivalence. A limiting cone always exists, since we can take the type to be the type of pairs , where

.

Of course, the proof that this cone is limiting uses the principle of function extensionality, but that is basically all there is to it. Now we can also easily compute the identity type of the limit:

**Lemma 1.** *For any , the identity type is equivalent to the limit of the diagram over , defined by*

.

Using the previous lemma, we can characterize the identity type of the coinductive type in a useful way:

**Theorem 2.** *Let be an indexed container, and let . Then the following are equivalent types:*

- *The type .*
- *The type of pairs consisting of and . Here and are the first and second component of the map .*
- *The limit of the type sequence*

*,*

*where the maps are given by .*

The last description of the identity type looks suspiciously much like the sequence that Ahrens, Capriotti and Spadotti used to define the coinductive types. That is precisely what we’re after: a description of the identity type of a coinductive type as a certain other coinductive type, the bisimulation relation! However, the bisimulation relation is not just an ordinary coinductive type of the kind we described above. It is rather an indexed coinductive type, so we need indexed containers. It is a bit of extra machinery, but it will be worth it!

An indexed container consists of a type , a type family , a further type family , and a re-indexing function . Just like ordinary containers, an indexed container determines a polynomial endofunctor acting on the category . It is defined by

.
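This data and the associated endofunctor on indexed families might be sketched in Lean 4 as follows (the field names are my own):

```lean
-- An indexed container over an indexing type I, with shapes A i, positions
-- B a, and a re-indexing function j.
structure IndexedContainer where
  I : Type
  A : I → Type
  B : {i : I} → A i → Type
  j : {i : I} → (a : A i) → B a → I

-- The polynomial endofunctor on I-indexed families:
-- (P X) i = Σ (a : A i), ∀ (b : B a), X (j a b).
def IndexedContainer.P (C : IndexedContainer) (X : C.I → Type) :
    C.I → Type :=
  fun i => (a : C.A i) × ((b : C.B a) → X (C.j a b))
```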

Again, the functorial action of is given by precomposition. As with ordinary containers, one can construct the final coalgebra for as the limit of the type sequence

.

Here, indicates the constant type family over with value .

Our goal is to define an indexed container that defines the bisimulation relation as a coinductive type. Note that we can basically read off from the second equivalent description of the identity type of in Theorem 2 what and should be:

- For the indexing type we take .
- We define .
- We define .
- We define the re-indexing function by .

Now let us compute the action of the polynomial functor explicitly. We have

.

Note that the third description in Theorem 2 already states that the type family is a coalgebra for . Now it is pretty straightforward to show that it is also final. What needs to be done is to construct a natural equivalence of the defining type sequence of , and the type sequence of Theorem 2. In other words, one has to show that we have a diagram of the form

**Update 10/17/2016:** Included an explicit description of the coalgebra structure of .
