From owner-qed Tue Oct 25 10:57:59 1994
Received: from localhost (listserv@localhost) by antares.mcs.anl.gov (8.6.4/8.6.4) id KAA19088 for qed-out; Tue, 25 Oct 1994 10:57:02 -0500
Received: from life.ai.mit.edu (life.ai.mit.edu [128.52.32.80]) by antares.mcs.anl.gov (8.6.4/8.6.4) with SMTP id KAA19083 for ; Tue, 25 Oct 1994 10:56:51 -0500
Received: from spock (spock.ai.mit.edu) by life.ai.mit.edu (4.1/AI-4.10) for qed@mcs.anl.gov id AB25950; Tue, 25 Oct 94 11:56:29 EDT
From: dam@ai.mit.edu (David McAllester)
Received: by spock (4.1/AI-4.10) id AA12499; Tue, 25 Oct 94 11:56:28 EDT
Date: Tue, 25 Oct 94 11:56:28 EDT
Message-Id: <9410251556.AA12499@spock>
To: dahn@mathematik.hu-berlin.de, piotr@cs.ualberta.ca
Cc: qed@mcs.anl.gov
In-Reply-To: Dahn's message of Tue, 25 Oct 94 13:53:51 +0100 <9410251253.AA03500@mathematik.hu-berlin.de>
Subject: PRA as a foundation
Sender: owner-qed@mcs.anl.gov
Precedence: bulk

I had an off-line discussion with Bob Boyer and got a much clearer picture
of the role of PRA in QED. I thought that using PRA as a central foundation would
force intersystem translations to produce metatheorems rather than direct "Platonic"
theorems (theorems that quantify directly over arbitrary sets). It turns out that
direct translations are consistent with the use of PRA. However, I am still skeptical
about PRA.
Here is a longer explanation of my current understanding. PRA seems
motivated by a series of reasonable objectives.
Objective 1. The QED project should keep track of what axioms are used for each theorem
proved.
I think this objective is reasonable but arguable. I am not convinced that
careful accounting of axioms is needed since most mathematicians aren't
going to care about the details of the foundation --- any sufficiently rich
foundation will be equivalent to ZFC for all practical purposes in "ordinary" mathematics.
However, if we do want to keep track of what principles are used
for any given theorem, then the QED data base should consist of a set of
pairs of theorems and systems --- each pair giving a theorem and the name of a
system in which it can be proved.
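As a concrete (and purely hypothetical) illustration of this proposal, such a data base could be as simple as a set of (theorem, system) pairs; the names and entries below are invented for the sketch and are not part of any actual QED design.

```python
# Hypothetical sketch of the proposed QED data base: a set of
# (theorem, system) pairs, where each pair records that the theorem
# has been verified within that system.
qed_db = set()

def record(theorem, system):
    """Record that `theorem` has been proved within `system`."""
    qed_db.add((theorem, system))

def systems_proving(theorem):
    """Return the set of systems in which `theorem` has been verified."""
    return {s for (t, s) in qed_db if t == theorem}

# Example entries (invented): a PRA-provable fact is also recorded
# under ZFC, while a choice-dependent theorem appears only under ZFC.
record("forall n. n + 0 = n", "PRA")
record("forall n. n + 0 = n", "ZFC")
record("every vector space has a basis", "ZFC")

print(sorted(systems_proving("forall n. n + 0 = n")))  # prints ['PRA', 'ZFC']
```

A consumer of the data base can then ask under which foundations a given result is available, without the data base itself committing to a single central logic.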
Objective 2. We want to build and use translators between the logics of the various systems
so that each system can use results proved by other systems in constructing still more
proofs.
This seems reasonable. However, if a single system captures the
hearts and minds of mathematicians then the logic of that system could
become a de facto standard and translation could become irrelevant. No
matter how strong our cooperative tendencies are, I think that
ultimately the nature of machine verification will be decided in a
free market fashion with mathematicians playing the role of consumers.
A single system will probably dominate.
Objective 3. Translators between systems should be formally verified.
This seems reasonable but arguable. I think an unverified translator is better than
no translator. Given the current state of the art in verification,
verified code usually requires an order of magnitude more time to construct.
There are two approaches to machine verifying intersystem translations --- syntactic
and semantic. One can either prove that the translations preserve provability
or that the translations preserve truth. Proving that the translations
preserve truth requires a logic in which the semantics of the two systems in question
can be defined. This is problematic because defining the semantics of a system
requires a stronger system. For example, the semantics of ZFC cannot be defined in ZFC.
A natural way around this is to make a stronger system by assuming the existence of
large cardinals. A system for verifying that translations preserve truth would have
to assume cardinals larger than those assumed by any of the systems being translated.
I find this perfectly acceptable, and it is the easiest approach
to verifying translations, since working semantically is often more efficient
(it gives shorter proofs). But the semantic approach to verifying translators
is in conflict with the following objective.
Objective 4. The logic in which translations are verified should be as
primitive as possible so as to be acceptable to the widest possible variety
of logicians.
Objective 4 seems to dictate a syntactic approach to verifying
translators. If f is a translator from system S1 to system S2, then the
correctness theorem states that if S1 |- Phi then S2 |- f(Phi). I was
confused about the role of PRA here and assumed that a central use of
PRA would force the translation f(Phi) to be a "meta" formula rather
than a "Platonic" formula (one that talks directly about arbitrary
sets). But nothing here is preventing f(Phi) from being a direct
(Platonic) set-theoretic formula. PRA seems adequate once we are
committed to a syntactic approach to verifying translators.
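To make the shape of such a translator f concrete, here is a toy illustration (not any real QED translator): a translation between two propositional languages that eliminates implication via the classical equivalence (A -> B) == (not A or B). The correctness theorem in the text, "if S1 |- Phi then S2 |- f(Phi)", would be proved about such an f by induction on S1-proofs; the code shows only the translation itself, with formulas as nested tuples.

```python
# Toy translation f between formula languages. Formulas are either
# atomic (strings) or tuples ("imp" | "or" | "and" | "not", args...).
# f replaces every implication ("imp", A, B) by ("or", ("not", A), B).

def f(phi):
    """Recursively translate a formula tree, eliminating implication."""
    if isinstance(phi, str):          # atomic proposition
        return phi
    op, *args = phi
    args = [f(a) for a in args]       # translate subformulas first
    if op == "imp":
        a, b = args
        return ("or", ("not", a), b)
    return (op, *args)

# f((P -> Q) -> P) becomes (not (not P or Q)) or P
print(f(("imp", ("imp", "P", "Q"), "P")))
```

Note that f is defined by structural recursion on formulas, so the statement "if S1 |- Phi then S2 |- f(Phi)" is itself a primitive recursive claim about proof objects --- exactly the kind of metatheorem that PRA is suited to express.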
Objective 5. All pairs of a theorem and a system in the central data base should
be verified in a central uncontroversial logic (such as PRA).
This is the most controversial objective. Doing this greatly increases
the cost of constructing each individual system because the systems
must now output PRA proofs of metaformulas of the form S |- Phi.
Every decision procedure used in the system must be built to construct
a trace of its execution that can be converted to a PRA proof. It
also dictates a syntactic approach to proving the correctness of the
translators. In my opinion it greatly increases the cost of QED.
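The trace discipline described above can be sketched as follows. This is a hypothetical illustration, not part of any actual QED proposal: a trivial decision procedure that, besides its answer, emits a step-by-step trace which an independent checker can replay. In the full proposal the trace would have to be elaborated into an actual PRA proof of "S |- Phi"; here the checker merely re-verifies each arithmetic step.

```python
# A decision procedure for sums that emits a checkable trace of its
# execution, one entry per rewrite step x + y  ~>  (x+1) + (y-1).

def decide_sum(a, b):
    """Compute a + b, recording each rewrite step in a trace."""
    trace = []
    while b > 0:
        trace.append((a, b, a + 1, b - 1))  # one rewrite step
        a, b = a + 1, b - 1
    return a, trace

def check(trace, a0, b0, result):
    """Independently replay the trace, confirming every step and the answer."""
    a, b = a0, b0
    for (x, y, x2, y2) in trace:
        if (x, y) != (a, b) or x2 != x + 1 or y2 != y - 1:
            return False                    # a forged or skipped step
        a, b = x2, y2
    return b == 0 and a == result

result, trace = decide_sum(3, 2)
print(result, check(trace, 3, 2, result))  # prints 5 True
```

The point of the objection stands: every decision procedure in every system would need this kind of instrumentation, and converting such traces into genuine PRA proofs is a substantial additional engineering burden.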
I think cost (person years) is going to be the main obstacle.
David