speleologic

Sep. 3rd, 2010

Constructive Completeness Proofs and Delimited Control

My PhD thesis has been approved for defense. You can find it here. Comments are welcome!

Chapter 1 gives a constructive version of a Henkin-style proof of Gödel's completeness theorem. Chapter 2 introduces a notion of model, similar to Kripke models, with which one can give a normalization-by-evaluation-style proof of completeness for classical logic. Chapter 3 does the same for intuitionistic logic (including disjunction and the existential quantifier). Finally, Chapter 4 presents an extension of intuitionistic logic with delimited control operators which remains intuitionistic (it preserves the disjunction and existence properties) and can prove the Double Negation Shift and Markov's Principle.

Mar. 4th, 2010

Video lecture of Voevodsky on Type-theoretic foundations of mathematics

lists.seas.upenn.edu/pipermail/types-list/2010/001477.html

With criticism of Coq's type theory, but with praise of Coq tactics :-)

Feb. 5th, 2010

A step-by-step proof of Open Induction by Bar Induction

The purpose of this post is to give a detailed proof, following Coquand [1], of the Open Induction Principle using Decidable-Bar Induction [2].

Open induction (OI) is useful when one cannot do a proof by the usual well-founded induction, that is, when there is no well-founded relation that can be used. In such cases, if the space one is working in is compact, and the property one wants to prove is open, one can use a non-well-founded order to do an induction proof. From the computer science perspective, this means one can build a terminating recursive program whose termination does not rely on a structurally decreasing criterion.

In this post we concentrate on Cantor space 2^N, but this is, in some sense, enough, because every compact (metric) space is an image of Cantor space. A member α of 2^N is a map N → 2, i.e. an infinite stream of bits, i.e. an infinite branch of the infinite binary tree. α < β if α is “left of” β as a branch of the infinite tree, that is, α < β if they are bit streams which agree on a finite initial segment and then, at some length n of that segment, α(n) is “left” while β(n) is “right”. We write ᾱn for the finite initial segment of α of length n. Each node in the infinite binary tree corresponds to some initial segment, and vice versa. Each such node, i.e. each finite bit string, is called a basic open. An open, or open subspace, of Cantor space is an arbitrary union of basic opens. Equivalently, an open is determined by a Boolean function G on arbitrary-length, but finite, bit strings, G : 2* → 2, which is monotone: if s, s′ ∈ 2*, s is an initial segment of s′, and G(s) = ⊤, then also G(s′) = ⊤.
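For concreteness, here is a minimal Coq sketch of this representation of opens. It assumes that finite bit strings are encoded as lists of booleans with the most recently added bit at the head; the names open_rep, monotone, iseg and member are mine, not Coquand's.

Require Import List Bool.
Import ListNotations.

(* Finite bit strings as lists of booleans, newest bit at the head. *)
Definition open_rep := list bool -> bool.

(* Monotonicity w.r.t. one-step extensions; by induction this gives
   monotonicity w.r.t. arbitrary extensions. *)
Definition monotone (G : open_rep) : Prop :=
  forall s b, G s = true -> G (b :: s) = true.

(* iseg alpha n is the initial segment ᾱn. *)
Fixpoint iseg (alpha : nat -> bool) (n : nat) : list bool :=
  match n with
  | 0 => []
  | S m => alpha m :: iseg alpha m
  end.

(* alpha belongs to the open given by G if some initial segment satisfies G. *)
Definition member (alpha : nat -> bool) (G : open_rep) : Prop :=
  exists n, G (iseg alpha n) = true.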

Axiom 1 (Decidable-Bar Induction)   For all unary predicates P and Q on 2* (finite binary strings), such that P is decidable and ∀ s. P s → Q s, if
  ∀ α ∈ 2^N. ∃ n ∈ N. P(ᾱn)
and
  ∀ s ∈ 2*. (∀ b ∈ 2. Q⟨b::s⟩) → Q s
then
  Q⟨⟩.
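The axiom can also be written down directly in Coq. The rendering below is a hedged sketch with strings as lists of booleans, newest element at the head; the names decidable_bar_induction and iseg are mine, and iseg repeats the initial-segment helper from the previous sketch so that the block stands alone.

(* A Coq rendering of (D-BI); iseg a n is the initial segment ᾱn. *)
Require Import List Bool.
Import ListNotations.

Fixpoint iseg (alpha : nat -> bool) (n : nat) : list bool :=
  match n with 0 => [] | S m => alpha m :: iseg alpha m end.

Axiom decidable_bar_induction :
  forall P Q : list bool -> Prop,
    (forall s, P s \/ ~ P s) ->                                  (* P is decidable *)
    (forall s, P s -> Q s) ->                                    (* P implies Q *)
    (forall alpha : nat -> bool, exists n, P (iseg alpha n)) ->  (* P bars every branch *)
    (forall s, (forall b : bool, Q (b :: s)) -> Q s) ->          (* Q is inductive *)
    Q [].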
Theorem 2 (Open Induction Principle)   Let U be an open subset of Cantor space 2^N and let < be the lexicographic ordering on 2^N. If
  ∀ α. (∀ β < α. β ∈ U) → α ∈ U     (*)
then
  ∀ α. α ∈ U.

Proof. Let α : 2^N be given. U is open in 2^N, hence given by a mapping G : 2* → 2 which is monotone w.r.t. initial segments: if s is a prefix of t, G s = ⊤ implies G t = ⊤. We say that G bars s with witness k (notation G|k s) if all k-bit extensions of s satisfy G.

We define mutually, by (primitive) recursion on finite strings of numbers, two functions f : N* → 2* and acc : N* → 2:

    acc⟨⟩       = ⊤
    acc⟨0::s⟩   = acc(s)
    acc⟨n+1::s⟩ = acc(s) ∧ G|n ⟨⊥::f(s)⟩

    f⟨⟩         = ⟨⟩
    f⟨0::s⟩     = ⟨⊥::f(s)⟩
    f⟨n+1::s⟩   = ⟨acc⟨n+1::s⟩::f(s)⟩
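These equations compute; here is a minimal Coq sketch of them, under the assumption that finite strings are lists with the newest element at the head and that ⊤/⊥ are true/false. G is left abstract, barb k s plays the role of G|k s, and the two mutually dependent functions are fused into one structural recursion returning a pair (the names barb and facc are mine).

Require Import List Bool.
Import ListNotations.

Parameter G : list bool -> bool.

(* barb k s decides G|k s: all k-bit extensions of s satisfy G. *)
Fixpoint barb (k : nat) (s : list bool) : bool :=
  match k with
  | 0 => G s
  | S k' => barb k' (false :: s) && barb k' (true :: s)
  end.

(* facc s computes the pair (f s, acc s) in one structural recursion,
   inlining acc⟨n+1::s⟩ = acc(s) ∧ G|n ⟨⊥::f(s)⟩. *)
Fixpoint facc (s : list nat) : list bool * bool :=
  match s with
  | [] => ([], true)
  | 0 :: s' =>
      let (fs, a) := facc s' in (false :: fs, a)
  | S n :: s' =>
      let (fs, a) := facc s' in
      let a' := a && barb n (false :: fs) in
      (a' :: fs, a')
  end.

Definition f (s : list nat) : list bool := fst (facc s).
Definition acc (s : list nat) : bool := snd (facc s).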

We set

  P(s) := G(f(s))
  Q(s) := acc(s) → ∃ m. G|m f(s)

and apply bar induction, which gives us a proof of Q(⟨⟩), which, since acc(⟨⟩) = ⊤, gives an m ∈ N such that every bit string of length m satisfies G. Hence, in particular, G(ᾱm) = ⊤, and therefore α ∈ U.

The hypotheses of (D-BI) are satisfied by the following lemmas.


Lemma 3   ∀ α ∈ N^N. ∃ n ∈ N. G(f(ᾱn))

Proof. We write f(α) for the natural extension of f to infinite sequences. The lemma then states that ∀ α ∈ N^N. f(α) ∈ U. Let α be given. Apply (*). Let β < f(α), that is, there is some n such that β̄n = ⟨⊥::s⟩ and ⟨⊤::s⟩ equals the length-n initial segment of f(α), which is f(ᾱn), for some s ∈ 2* of length n−1. By the defining equations of f and acc, n > 0, s = f(ᾱ(n−1)), acc(s) holds, and G|k⟨⊥::s⟩, that is, G|k β̄n, for some k. Hence G(β̄(n+k)) = ⊤, i.e. β ∈ U.


Lemma 4   For any s ∈ N*, if
(∀ n ∈ N. acc⟨n::s⟩ → ∃ m. G|m f⟨n::s⟩)
and acc(s), then ∃ n. G|n f(s).

Proof. Since acc(s), also acc⟨0::s⟩, and the hypothesis gives us an m such that G|m f⟨0::s⟩, that is, G|m⟨⊥::f(s)⟩. This means that acc⟨m+1::s⟩, which we use with the hypothesis again to get a k such that G|k f⟨m+1::s⟩, i.e. G|k⟨acc⟨m+1::s⟩::f(s)⟩, i.e. G|k⟨⊤::f(s)⟩. We have shown that G bars both ⟨⊤::f(s)⟩ and ⟨⊥::f(s)⟩, which, since a bar is preserved when its witness is increased (by monotonicity of G), means that G already bars f(s) itself: G|max(m,k)+1 f(s).


Lemma 5   ∀ s ∈ N*. G(f(s)) → acc(s) → ∃ m. G|m f(s).

Proof. G(f(s)) iff G|0 f(s).


Remark 6  The given lemmas, and the functions f and acc, are more general than needed for (D-BI), since they take input of type N* instead of 2*.

References

[1] Thierry Coquand. A note on the open induction principle. 1997.
[2] Anne S. Troelstra and Dirk van Dalen. Constructivism in Mathematics: An Introduction, Volumes I and II, volumes 121 and 123 of Studies in Logic and the Foundations of Mathematics. North-Holland, 1988.

Oct. 4th, 2009

Video Lectures: "Computer Verified Exact Analysis"

Very nice video lectures by Russell O'Connor and Bas Spitters from CCA 2009:

videolectures.net/aug09_spitters_oconnor_cvia/

The first lecture gives an overview of exact real number computation and the authors' starting points, and then goes on to talk about imperative versus monadic functional programming in general. The second one is a gentle introduction to the Coq proof system. The third lecture is on constructive analysis and the monadic structure of completion of metric spaces that the implementation in Coq uncovers.

Apr. 19th, 2009

Ishihara's Trick 1

Some time ago I came upon a mention of Ishihara's proof technique in constructive analysis and a claim:

In [6], Ishihara introduced the following two lemmas in which, surprisingly, we can use completeness to make a decision which at first sight would seem to be impossible with purely constructive techniques (that is, with intuitionistic, rather than classical, logic). (Bridges, van Dalen, Ishihara: Ishihara’s proof technique in constructive analysis)

One does not have to be interested in mathematical analysis to appreciate it; it is enough to think about a sequence of computation steps and a measure of "distance" between every two consecutive computation steps. If one can prove that, for a given sequence, this distance gets smaller and smaller as time goes on, we say that the computation converges, or is Cauchy. When we consider these converging processes of computation as inhabitants of some universe X, we can say that X is a complete metric space.
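To make this concrete, here is one common constructive way of packaging such a converging computation in Coq: a sequence together with a modulus saying how far one must go to reach a given precision. This is only an illustrative sketch; the names CauchyPoint and d, and the encoding of precision as a natural number k (read as 2^-k), are mine, not taken from any of the papers discussed here.

(* A point of a completion: a sequence with a modulus of convergence.
   d x y k is read as "x and y are within 2^-k of each other". *)
Record CauchyPoint (A : Type) (d : A -> A -> nat -> Prop) : Type := {
  seq       : nat -> A;
  modulus   : nat -> nat;
  is_cauchy : forall k m n,
      modulus k <= m -> modulus k <= n -> d (seq m) (seq n) k
}.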

Here I want to give a simpler (more predicative) proof of Bridges, van Dalen, and Ishihara's Proposition 1, one which more obviously gives rise to a program than theirs.

Proposition 1. If X is a complete metric space, X=P∪Q, (x_n) is a sequence in X that converges to x and we have ∀y:X, x#y ∨ y∉Q, then we have that (∀n. x_n ∈ P) or (∃ n. x_n ∈ Q).

The role of x, y as members of X is to stand for a sequence (x_n), (y_n) together with a proof that the sequences make progress as time progresses, i.e. they are Cauchy/converging. The relation '#' is called apartness and it means that the two computation processes are not 'the same'.

Proof. Start with a sequence (x_n). Define the binary sequence λ to be 0 as long as the elements of the sequence (x_n) are in P, and to become 1 forever as soon as one of the elements of (x_n) is in Q.

Now define the sequence (y_n) to be x as long as λ_n is 0, and to become x_k as soon as λ_n becomes 1, where k is the smallest natural number at which λ becomes 1, i.e. the k-th element of the sequence (x_n) is in Q. The sequence (y_n) we have defined is very similar to a constant sequence: it is either x forever, or it is x for a finite number of initial elements and then x_k forever.

In any case (y_n) is Cauchy, so we can think of it as an entity y from the universe X. We now use the hypothesis given in Proposition 1 and look at two cases.
  1. if x#y then there is k, the smallest number such that λ_k = 1, which implies x_k ∈ Q
  2. if y∉Q then we can show that ¬∃n. λ_n = 1 and, since λ is decidable, ∀n. λ_n = 0, which implies ∀n. x_n ∈ P
QED
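The construction of λ and (y_n) can be sketched in Coq, abstracting away the metric structure (so the Cauchyness of (y_n) and the final case distinction are not formalized here). The decision inPQ below stands for what the hypothesis X = P ∪ Q provides pointwise; all names are mine, not Bridges, van Dalen, and Ishihara's.

Section IshiharaTrick.
  Variable X : Type.
  Variable x : X.                  (* the limit of (x_n) *)
  Variable xs : nat -> X.          (* the sequence (x_n) *)
  Variables P Q : X -> Prop.
  (* Pointwise decision provided by X = P ∪ Q. *)
  Hypothesis inPQ : forall n, {P (xs n)} + {Q (xs n)}.

  (* lam n = true once inPQ has reported Q for some x_k with k <= n. *)
  Fixpoint lam (n : nat) : bool :=
    match n with
    | 0 => if inPQ 0 then false else true
    | S m => if lam m then true else if inPQ (S m) then false else true
    end.

  (* firstQ n: the stage <= n at which lam first became true
     (only meaningful when lam n = true). *)
  Fixpoint firstQ (n : nat) : nat :=
    match n with
    | 0 => 0
    | S m => if lam m then firstQ m else S m
    end.

  (* y n is x while lam is false, and x_k forever once lam flips at k. *)
  Definition y (n : nat) : X :=
    if lam n then xs (firstQ n) else x.
End IshiharaTrick.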

Apparently, in constructive analysis they use Proposition 1 (and 2, which was not shown) as constructive instances of the general law of excluded middle. Is it useful in logic and computer science?

Apr. 17th, 2009

Intuitionistic vs Classical proofs

A quote from an interview with Kohlenbach which makes me think things over:

A common view is that a (fully) constructive proof per se is superior (contains more information) than a prima facie classical argument. Experience from the unwinding of classical arguments, however, shows that usually only small parts of a proof contribute to the computational content of a proof while the bulk of the proof may well be proven ineffectively. Sometimes, using a more ineffective argument helps to push further parts of a proof into lemmas which are not needed to be analyzed at all. Replacing ineffective analytic or geometric steps in a proof by constructive induction arguments can result in bounds that are much less good than those extractable from the original classical proofs (see e.g. [7]). Often even ineffective parts in a proof which do matter for the computational content can be systematically transformed in such a way that computationally relevant information can be read off.

Indeed, sometimes it is not easy to see what the computational content of a constructive proof is, even when you spend a few months formalising it in a proof assistant yourself. The reason may well be that there is none.

Nov. 22nd, 2008

The fan "theorem"

The fan theorem is a theorem of Brouwer's original intuitionistic analysis, which follows from the principle of bar induction. In a classical setting it is also a theorem, thanks to the principle of excluded middle.

In constructive type theory it is not a theorem (yet), although by adding a principle of bar induction we can formally prove it. (see http://www.brics.dk/RS/98/39/BRICS-RS-98-39.ps.gz)

My interest in the fan theorem comes from some specific places where it appears, like Wim Veldman's completeness theorem or a type-theoretical explanation of backtracking. Hence I am interested in specific instances of it and their computational content.

I was wondering, however, what the problem is with the general formulation, and that is why I tried to express it in Coq for the particular fan of Cantor space. It boils down to giving a proof of the following:

B : (nat -> option bool) -> Prop
bB : forall a : nat -> bool, exists l : nat, B (IS a l)
mB : forall (a : nat -> bool) (l : nat), B (IS a l) -> B (IS a (S l))
============================
exists n : nat, forall a : nat -> bool, B (IS a n)
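The post does not spell out IS. One plausible reading, on which the hypotheses and goal above make sense, is that IS a l is the initial segment of a of length l, padded with None beyond l; the definition below is my hypothetical reconstruction, not necessarily the one used in the actual development.

(* Hypothetical reading of IS: the first l bits of a, None from position l on. *)
Definition IS (a : nat -> bool) (l : nat) : nat -> option bool :=
  fun i => if Nat.ltb i l then Some (a i) else None.

Under this reading, bB says that B bars every branch, mB that B is monotone in the length of the segment, and the goal asks for a uniform bound n that works for all branches.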

Nov. 13th, 2008

About forcing and monads

Forcing is a model theoretic technique used to prove that adding certain axioms to classical set theory does not produce (new) inconsistencies in the resulting theory; or that certain statements are neither provable nor disprovable from the axioms of set theory.

It was famously invented and used by Paul Cohen in 1962–1963 for his proofs of the independence of the axiom of choice and of the continuum hypothesis from the axioms of Zermelo–Fraenkel set theory.

By the completeness theorem for first-order logic (of which set theory is an instance), one can think of forcing as a procedure for transforming a deduction in one formal system into a deduction in another. My doctoral thesis supervisor and I are interested in finding out what the computational content of such a transformation is.

In the book "Foundations of Constructive Mathematics", Michael Beeson devotes a chapter to forcing in which he translates the technique to a constructive setting in which no notion of a model is involved, thereby allowing to read his exposition as a procedure for transforming proofs.

His interest is in obtaining conservative extension results. More precisely, he gives a general methodology for showing a theory T to be a conservative extension of a theory S. He uses this methodology to give a new (we are talking about 1979 here) and conceptually clearer proof of Goodman's theorem which says that HA^ω+AC is a conservative extension of HA^ω (which is conservative over HA). HA is Heyting arithmetic, HA^ω is higher-order Heyting arithmetic and AC is the axiom of choice.

The method is to show that the following chain of interpretations is sound: HA^ω ---> HA^ω+AC ---> HA^ω. The first arrow is a "realisability interpretation" of HA^ω inside HA^ω+AC. The second one is a forcing translation.

I have not worked out the exact translation, because I would rather spend the time working out a newer computational-content result, namely that of Berardi, Bezem, and Coquand from "On the computational content of the axiom of choice". Still, I would like to approach a question posed by my supervisor:

Is forcing a monad?

The question is about monads as in programming language theory, not monads from category theory, because we are interested in computational content. Hence we are expected to provide the following components (a minimal Coq signature for them is sketched after the list):

1. a type constructor that encloses the computation going on inside the monad

2. a unit operation which creates the initial monad environment

3. a binding operation which composes consecutive computational steps inside the monad
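For reference, here is the shape being asked for, written as a minimal Coq record. This is just the standard programming-language monad signature, not anything specific to forcing or to Beeson's construction; the question is whether the forcing translation can be organized as an instance of it.

(* The usual PL monad interface: a type constructor M with unit and bind. *)
Record Monad (M : Type -> Type) : Type := {
  ret  : forall A : Type, A -> M A;
  bind : forall A B : Type, M A -> (A -> M B) -> M B
}.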

I think that from Beeson's exposition in Chapter 15 of the mentioned book, these components are already identified and separated.

Namely, he begins with a constructive theory T from which he creates an extension, a theory called Ta, which contains an additional constant 'a' and some axioms concerning this constant. Now the point of the forcing transformation is to associate to each formula of Ta a formula of T.

He fixes a "definable" partial order (C,<) with a minimal element 0. Definable means that both C and < are "given by a formula of T, and T proves that (C,<) is a partial order with a minimal element 0".

The type C is also called "a set of conditions".

Now he specifies what forcing is supposed to look like, separately for atomic formulas and for composite ones. To make it shorter (but see below for a Coq specification), atomic formulas are expected to satisfy monotonicity and substitution, while composite formulas are expected to satisfy the usual forcing conditions. Beeson then gives a general result for any formula transformation which satisfies these conditions, and calls it a soundness theorem: if A is provable in Ta, then "for all p there exists a q extending p which 'forces' A" is provable in T.

Going back to monads, I think that the given specification determines, type-theoretically, a class of monad types: the composition (binding) operation of a particular monad type handles composite formulas, while the unit operation "initializes" the monad with a translation for atomic formulas.

The two operations have to be given for each formal system separately and they are given for the concrete formal system Beeson is using inside his soundness theorem. (p. 348)

Here is a type-checked Coq definition of what seems to be a possible type of a forcing monad:

Program Fixpoint forces (f:formula) (p:C) (af:formula->C->Set)
    {measure depth f} : Type :=
  match f with
  | (atom P t) => monotonicity af (atom P t) (* to add substitution *)
  | (all A)    => forall x:nat, forall q, ext q p ->
                    {r:C & ext r q * forces (A^x) r af}
  | (imp A B)  => forall q, ext q p -> forces A q af ->
                    {r:C & ext r q * forces B r af}
  | bot        => Empty_set
  end.
A few questions come to my mind at the moment:

1. Is this methodology useful to prove any conservativity result which concerns constructive type theory? (not set theory, or Heyting arithmetic)

2. Is it useful at least as a tool for "applied formal meta-theory"?

3. Does this make any sense?