From qed-owner Sun Apr 18 20:33:55 1993
Received: by antares.mcs.anl.gov id AA05303
(5.65c/IDA-1.4.4 for qed-outgoing); Sun, 18 Apr 1993 20:31:40 -0500
Received: from donner.mcs.anl.gov by antares.mcs.anl.gov with SMTP id AA05296
(5.65c/IDA-1.4.4 for ); Sun, 18 Apr 1993 20:31:39 -0500
Message-Id: <199304190131.AA05296@antares.mcs.anl.gov>
To: qed@mcs.anl.gov
Subject: archiving of qed mail
Date: Sun, 18 Apr 1993 20:31:37 -0500
From: Rusty Lusk
Sender: qed-owner
Precedence: bulk
I have created a subdirectory of pub/qed at info.mcs.anl.gov called archive,
where I will attempt to archive the qed mail, so that anyone can catch up, and
no one need archive it locally. Currently the messages are in files named
1...27, and they are catted together into the file archive1-27. So you can
retrieve the discussion so far (minus a few of the very earliest postings) by
anonymous ftp to info.mcs.anl.gov, directory pub/qed, file archive1-27. Let
me know if you have any problems.
Rusty Lusk
From jt@linus.mitre.org Sun Apr 18 21:54:42 1993
Received: from linus.mitre.org by antares.mcs.anl.gov with SMTP id AA06230
(5.65c/IDA-1.4.4 for ); Sun, 18 Apr 1993 21:54:38 -0500
Return-Path:
Received: from malabar.mitre.org by linus.mitre.org (5.61/RCF-4S)
id AA28080; Sun, 18 Apr 93 22:54:36 -0400
Posted-Date: Sun, 18 Apr 93 22:54:34 -0400
Received: by malabar.mitre.org (5.61/RCF-4C)
id AA04587; Sun, 18 Apr 93 22:54:34 -0400
Date: Sun, 18 Apr 93 22:54:34 -0400
From: jt@linus.mitre.org
Message-Id: <9304190254.AA04587@malabar.mitre.org>
To: qed-owner@mcs.anl.gov
Subject: support
Robert Boyer states:
It seems to me crucial that if the QED project is to have the support
of the now very wide constructivist community, then underlying QED
must be some such logic to which everyone can agree.
In the current climate of severely restricted research funding,
isn't the following also true and perhaps more relevant?
If the QED project is to have the support of the scientific community
as a whole, then it should have goals that can be stated in ways
which people outside the field would find useful and worth funding.
In other words, if we want to get this thing going, shouldn't we look
outwards?
Javier Thayer
From qed-owner Mon Apr 19 12:14:46 1993
Received: by antares.mcs.anl.gov id AA19238
(5.65c/IDA-1.4.4 for qed-outgoing); Mon, 19 Apr 1993 11:58:58 -0500
Received: from swan.cl.cam.ac.uk by antares.mcs.anl.gov with SMTP id AA19226
(5.65c/IDA-1.4.4 for ); Mon, 19 Apr 1993 11:58:34 -0500
Received: from dunlin.cl.cam.ac.uk (user lcp (rfc931)) by swan.cl.cam.ac.uk
with SMTP (PP-6.4) to cl; Mon, 19 Apr 1993 17:57:11 +0100
To: qed@mcs.anl.gov
Subject: little theories and encryption
Date: Mon, 19 Apr 93 17:57:03 +0100
From: Lawrence C Paulson
Message-Id: <"swan.cl.ca.644:19.04.93.16.57.14"@cl.cam.ac.uk>
Sender: qed-owner
Precedence: bulk
Without doubt we should attempt to include constructivists and other
philosophies of mathematics, and this does mean that the base logic will have
to be pretty simple. PRA is possibly too simple -- I would hate to give up
quantifiers. Isabelle uses intuitionistic higher-order logic (with implies,
forall and equality). This could be further weakened to be essentially
first-order, but admitting bound function variables in order to handle axiom
schemes and descriptions. The AUTOMATH-style type theories that are being
studied at the moment also deserve serious consideration.
Mathematics done in simple base logics is not encrypted at all. Isabelle
supports the illusion that you are working directly in the formalized logic,
be it ZF, HOL, Constructive Type Theory or what have you. AUTOMATH type
theories also support a natural proof style.
There is a problem, though: many proofs are essentially the same no matter
what sort of framework (constructive, classical, ...) you are working in.
If we are to support several different schools of mathematics, then there is
the risk that the database will contain many duplicated proofs. This
certainly happens with Isabelle. Perhaps 'little theories' could prevent
this; each theorem would be proved in the weakest framework possible for that
particular theorem.
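(An editorial aside, not part of the original message.) A programming analogy may make the "little theories" discipline concrete: state a result once against the weakest structure it needs, then instantiate it in every richer structure. The monoid example below is a hypothetical sketch, not anything from Isabelle:

```python
# Sketch: a "theorem" proved once over the weakest theory it needs
# (a monoid: an associative operation with a unit), then reused in
# richer structures. The names here are illustrative only.

def fold_monoid(op, unit, xs):
    """Combine xs left to right, using only the monoid laws."""
    acc = unit
    for x in xs:
        acc = op(acc, x)
    return acc

# The same result instantiated in two different "little theories":
print(fold_monoid(lambda a, b: a + b, 0, [1, 2, 3]))    # 6   (integers under +)
print(fold_monoid(lambda a, b: a + b, "", ["a", "b"]))  # ab  (strings under concatenation)
```

The point is that nothing in `fold_monoid` mentions integers or strings; each instantiation inherits the result for free, which is the duplication-avoidance the message describes.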
We need a combination of 'little theories' and a fairly weak base logic,
though perhaps stronger than PRA. Much work needs to be done towards relating
proofs done in different formal systems.
Larry Paulson
From qed-owner Tue Apr 20 10:15:17 1993
Received: by antares.mcs.anl.gov id AA15260
(5.65c/IDA-1.4.4 for qed-outgoing); Tue, 20 Apr 1993 10:05:23 -0500
Received: from life.ai.mit.edu by antares.mcs.anl.gov with SMTP id AA15251
(5.65c/IDA-1.4.4 for ); Tue, 20 Apr 1993 10:05:13 -0500
Received: from spock (spock.ai.mit.edu) by life.ai.mit.edu (4.1/AI-4.10) id AA06485; Tue, 20 Apr 93 11:04:52 EDT
From: dam@ai.mit.edu (David McAllester)
Received: by spock (4.1/AI-4.10) id AA01103; Tue, 20 Apr 93 11:04:50 EDT
Date: Tue, 20 Apr 93 11:04:50 EDT
Message-Id: <9304201504.AA01103@spock>
To: Larry.Paulson@cl.cam.ac.uk
Cc: qed@mcs.anl.gov
In-Reply-To: Lawrence C Paulson's message of Mon, 19 Apr 93 17:57:03 +0100 <"swan.cl.ca.644:19.04.93.16.57.14"@cl.cam.ac.uk>
Subject: Different Schools
Sender: qed-owner
Precedence: bulk
> Mathematics done in simple base logics is not encrypted at all. Isabelle
> supports the illusion that you are working directly in the formalized logic ...
ok, I believe this. I had not thought to use a metasystem such as Isabelle
as the base logic. (Is "metasystem" an acceptable term for Isabelle?)
> There is a problem, though: ... If we are to support several different schools
> of mathematics, then there is the risk that the database will contain many
> duplicated proofs.
I will stop using the term "encrypted" and simply note that if the
database uses different languages for different schools then we may
end up writing lots of translators. This is true even if each school
defines its logic in the same simple base logic (metasystem). A many-
language approach might be OK. However, there is the problem of
confidence (we must trust the methods of the different schools) and
the problem of intertranslation. It seems that the use of a base
logic as a metasystem does not avoid these problems.
It seems to me that there is still a case for defining a highly
expressive lingua franca. As I noted earlier, a lingua franca, such
as the language of set theory, need make no commitment to any
particular set of inference principles. The lingua franca could be
neutral as to whether reasoning should be constructive.
David
From qed-owner Tue Apr 20 10:42:04 1993
Received: by antares.mcs.anl.gov id AA16693
(5.65c/IDA-1.4.4 for qed-outgoing); Tue, 20 Apr 1993 10:30:00 -0500
Received: from swan.cl.cam.ac.uk by antares.mcs.anl.gov with SMTP id AA16681
(5.65c/IDA-1.4.4 for ); Tue, 20 Apr 1993 10:29:50 -0500
Received: from dunlin.cl.cam.ac.uk (user lcp (rfc931)) by swan.cl.cam.ac.uk
with SMTP (PP-6.5) to cl; Tue, 20 Apr 1993 16:29:23 +0100
To: qed@mcs.anl.gov
Subject: Different Schools
Date: Tue, 20 Apr 93 16:29:09 +0100
From: Lawrence C Paulson
Message-Id: <"swan.cl.cam.:080740:930420152927"@cl.cam.ac.uk>
Sender: qed-owner
Precedence: bulk
Translation (or interpretation) between different schools is one of the key
issues. We also need translation to import theories from other systems.
A common "lingua franca" could be valuable; it is another of the key issues.
Should this merely formalize assertions, or should it also capture aspects of
their proofs, like the "mathematical vernacular" (de Bruijn's concept)?
Making the "lingua franca" neutral will not be easy. The if-then-else
construct, descriptions and powersets are unacceptable in certain schools. I
also fear disagreement on whether the "lingua franca" should be typed or
untyped (I prefer the latter).
Larry Paulson
From qed-owner Tue Apr 20 11:09:33 1993
Received: by antares.mcs.anl.gov id AA17612
(5.65c/IDA-1.4.4 for qed-outgoing); Tue, 20 Apr 1993 11:03:55 -0500
Received: from linus.mitre.org by antares.mcs.anl.gov with SMTP id AA17598
(5.65c/IDA-1.4.4 for ); Tue, 20 Apr 1993 11:03:52 -0500
Received: from malabar.mitre.org by linus.mitre.org (5.61/RCF-4S)
id AA27035; Tue, 20 Apr 93 12:03:47 -0400
Posted-Date: Tue, 20 Apr 93 12:03:43 -0400
Received: by malabar.mitre.org (5.61/RCF-4C)
id AA05059; Tue, 20 Apr 93 12:03:43 -0400
Date: Tue, 20 Apr 93 12:03:43 -0400
From: jt@linus.mitre.org
Message-Id: <9304201603.AA05059@malabar.mitre.org>
To: qed@mcs.anl.gov
Subject: illusion
Cc: jt@linus.mitre.org
Sender: qed-owner
Precedence: bulk
I keep hearing the phrase "such and such supports the illusion
that...". Could someone explain what "supporting an illusion" means and
what advantages there are in maintaining these illusions? In
particular, is there some level at which the illusions aren't
maintained?
Javier Thayer
From qed-owner Tue Apr 20 11:54:27 1993
Received: by antares.mcs.anl.gov id AA18710
(5.65c/IDA-1.4.4 for qed-outgoing); Tue, 20 Apr 1993 11:48:32 -0500
Received: from swan.cl.cam.ac.uk by antares.mcs.anl.gov with SMTP id AA18668
(5.65c/IDA-1.4.4 for ); Tue, 20 Apr 1993 11:47:22 -0500
Received: from dunlin.cl.cam.ac.uk (user lcp (rfc931)) by swan.cl.cam.ac.uk
with SMTP (PP-6.5) to cl; Tue, 20 Apr 1993 17:43:49 +0100
To: qed@mcs.anl.gov
Subject: Re: illusion
In-Reply-To: Your message of Tue, 20 Apr 93 12:03:43 -0400. <9304201603.AA05059@malabar.mitre.org>
Date: Tue, 20 Apr 93 17:43:33 +0100
From: Lawrence C Paulson
Message-Id: <"swan.cl.cam.:097400:930420164359"@cl.cam.ac.uk>
Sender: qed-owner
Precedence: bulk
When I remarked "Isabelle supports the illusion that you are working directly
in the formalized logic", I meant that users could work without knowing how
their object-logic was encoded in Isabelle's meta-logic. The illusion is
maintained through many mechanisms, e.g. the parser/prettyprinter translates
the internal encodings into a readable notation.
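(Editorial illustration, not Isabelle's actual encoding or code.) The mechanism can be sketched in a few lines: the formula lives internally as a plain constructor term, and a prettyprinter renders it in conventional notation, which is all the "illusion" amounts to:

```python
# Toy sketch: an object-logic formula stored as a meta-level term,
# with a prettyprinter sustaining the readable notation.
from dataclasses import dataclass

@dataclass
class App:
    head: str     # constructor or predicate name
    args: tuple   # subterms (or a bound-variable name, for binders)

# Internal encoding of: ALL x. (P(x) --> P(x))
phi = App("Forall", ("x", App("Imp", (App("P", (App("Var", ("x",)),)),
                                      App("P", (App("Var", ("x",)),))))))

def pretty(t):
    """Render the internal encoding in conventional notation."""
    if t.head == "Var":
        return t.args[0]
    if t.head == "Forall":
        v, body = t.args
        return f"ALL {v}. {pretty(body)}"
    if t.head == "Imp":
        a, b = t.args
        return f"({pretty(a)} --> {pretty(b)})"
    # generic predicate application, e.g. P(x)
    return f"{t.head}({', '.join(pretty(a) for a in t.args)})"

print(pretty(phi))  # ALL x. (P(x) --> P(x))
```

A user who only ever sees the output of `pretty` (and its inverse, the parser) can work without knowing the internal representation exists, which is the sense of "illusion" intended above.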
The encodings are rather straightforward. They resemble the ones Church used
to encode higher-order logic into the typed lambda calculus. Church's
encodings are now generally accepted as the very definition of higher-order
logic.
It may seem dangerous to speak of illusions when we are after certainty; a
fault in the parser/prettyprinter could be nearly as harmful as a fault in the
inference mechanisms themselves. But illusions are a tool of our trade.
Inside the computer there is nothing but bit patterns. Every theorem prover
encodes its logic ultimately in terms of bits. Even programmers rely upon the
illusion that terms and formulae are present.
Larry Paulson
From qed-owner Tue Apr 20 12:15:33 1993
Received: by antares.mcs.anl.gov id AA19185
(5.65c/IDA-1.4.4 for qed-outgoing); Tue, 20 Apr 1993 12:07:13 -0500
Received: from chelm.cs.umass.edu by antares.mcs.anl.gov with SMTP id AA19178
(5.65c/IDA-1.4.4 for ); Tue, 20 Apr 1993 12:07:09 -0500
Received: by chelm.cs.umass.edu (5.65/DEC-Ultrix/4.3)
id AA29743; Tue, 20 Apr 1993 13:07:02 -0400
Date: Tue, 20 Apr 1993 13:07:02 -0400
From: yodaiken@chelm.cs.umass.edu (victor yodaiken)
Message-Id: <9304201707.AA29743@chelm.cs.umass.edu>
To: qed@mcs.anl.gov
Subject: base language
Sender: qed-owner
Precedence: bulk
How about the standard language of algebra? Take Serge Lang's text,
set up a few syntactic rules, and be done with it. The meaning of a term
and even whether or not the term is syntactically valid could depend
on the specific subsystem. Let's not inflict the horrors of formal logic
notation or worse on people who want to prove theorems about something
other than logic itself.
One of the most appealing aspects of Goodstein/Skolem's PRA is that
it integrates so easily into standard mathematical practice.
From qed-owner Tue Apr 20 12:32:55 1993
Received: by antares.mcs.anl.gov id AA19606
(5.65c/IDA-1.4.4 for qed-outgoing); Tue, 20 Apr 1993 12:24:38 -0500
Received: from linus.mitre.org by antares.mcs.anl.gov with SMTP id AA19599
(5.65c/IDA-1.4.4 for ); Tue, 20 Apr 1993 12:24:36 -0500
Received: from malabar.mitre.org by linus.mitre.org (5.61/RCF-4S)
id AA00496; Tue, 20 Apr 93 13:24:31 -0400
Posted-Date: Tue, 20 Apr 93 13:24:28 -0400
Received: by malabar.mitre.org (5.61/RCF-4C)
id AA05106; Tue, 20 Apr 93 13:24:28 -0400
Date: Tue, 20 Apr 93 13:24:28 -0400
From: jt@linus.mitre.org
Message-Id: <9304201724.AA05106@malabar.mitre.org>
To: qed@mcs.anl.gov
Subject: illusions
Sender: qed-owner
Precedence: bulk
OK, I agree that illusions are "part of the trade." That is not my
objection. (Although please find a better expression than illusion.)
However, for a non-constructivist analyst such as myself, it seems
very convoluted to encode everything in an unfamiliar and unintuitive
(for me) logic. What is the point? To facilitate translatability? I
don't believe that this is possible in any way which I would find
acceptable. For instance, constructivists typically don't accept the
classical intermediate value theorem for continuous functions. So a
translation of the classical intermediate value theorem will have no
intuitive content at all.
Javier
From qed-owner Tue Apr 20 15:28:12 1993
Received: by antares.mcs.anl.gov id AA25003
(5.65c/IDA-1.4.4 for qed-outgoing); Tue, 20 Apr 1993 15:24:38 -0500
Received: from cats.UCSC.EDU by antares.mcs.anl.gov with SMTP id AA24996
(5.65c/IDA-1.4.4 for ); Tue, 20 Apr 1993 15:24:35 -0500
Received: from si.UCSC.EDU by cats.UCSC.EDU with SMTP
id AA05266; Tue, 20 Apr 93 13:24:32 -0700
From: beeson@cats.UCSC.EDU
Received: by si.ucsc.edu (5.65/4.7) id AA20321; Tue, 20 Apr 93 13:24:30 -0700
Date: Tue, 20 Apr 93 13:24:30 -0700
Message-Id: <9304202024.AA20321@si.ucsc.edu>
To: qed@mcs.anl.gov
Subject: Machine Math
Sender: qed-owner
Precedence: bulk
I would like to make a distinction between two kinds of formal languages
needed for the QED project:
(1) a high-level language in which humans (and possibly machines)
write proofs and read proofs.
(2) a low-level language in which machines write proofs and check proofs
(and possibly generate proofs).
These are analogous to high-level programming languages and assembly code.
In the early days of computing, there was only assembly code. Then
high-level languages and compilers were constructed. With regard to formal
proofs, we are still in those early days. I envision a high-level language,
which for want of a better name I am calling "Machine Math", which would
be approximately as hard to learn as a modern programming language; that
is, somewhat harder than TeX, but not much. Ordinary mathematicians
(who are not logicians or computer scientists) could learn it and write
their proofs in it. A "proof compiler", or many different compilers,
could be built to compile Machine Math into (various) low-level
formal languages, for which proof-checkers in turn can be built.
If anything like the scope of Bourbaki is contemplated, the
initial focus of the project must be on the design of Machine Math.
Computers should NOT BE INVOLVED in the first year of experimentation!
(That's a radical thing for me to say, because I've been up to my elbows
in computer code for the last seven years.)
If the project could successfully produce a specification for Machine
Math, and a large number of examples of proofs formalized in Machine Math,
then independent groups of researchers may build compilers into the
low-level logic of their choice, and build proof-checkers, independently
of one another. This strikes me as much more feasible than trying
to build a single computer system with the cooperation of hundreds of
researchers scattered around the globe.
Machine Math must support definitions, theorems, proofs, and
algorithms. One of the issues about which there will be much
discussion is, what are the allowed mechanisms for specifying an
algorithm? A tremendous variety of informal descriptions occurs in
the mathematical literature, and this is probably the most noticeable
place in which Machine Math proofs will have to be "more formal" than
journal proofs are now; algorithms will clearly have to be written in
some more formal way. We might want to consider some extension of
the Mathematica syntax (or a variant) to a more abstract language;
it would help that the Mathematica syntax is already fairly well-known.
Less obvious is the fact that Machine Math must support "theories".
Each piece of mathematics takes place in the context of (one or more)
theories; functions may have different definitions in different theories
for example. The use of theories will be a help, however, in organizing
a large database of mathematical knowledge.
I suggest the following organization of the QED effort:
1. Machine Math language design and specification.
2. Formalization of a sizeable body of mathematics in Machine Math.
3. Specification of low-order logic(s) as target language(s).
4. Construction of Machine Math compiler(s) into the target language(s).
5. Construction of proof-checkers for the target languages.
6. Construction of database software to manipulate a large body of
mathematics in Machine Math form.
From qed-owner Tue Apr 20 22:38:51 1993
Received: by antares.mcs.anl.gov id AA04125
(5.65c/IDA-1.4.4 for qed-outgoing); Tue, 20 Apr 1993 22:29:15 -0500
Received: from arp.anu.edu.au by antares.mcs.anl.gov with SMTP id AA04117
(5.65c/IDA-1.4.4 for ); Tue, 20 Apr 1993 22:29:04 -0500
Received: by arp.anu.edu.au id AA23983
(5.65c/IDA-1.4.2.9 for qed@mcs.anl.gov); Wed, 21 Apr 1993 13:28:40 +1000
Received: from Messages.7.15.N.CUILIB.3.45.SNAP.NOT.LINKED.arp.anu.edu.au.sun4.41
via MS.5.6.arp.anu.edu.au.sun4_41;
Wed, 21 Apr 1993 13:28:40 +1000 (EST)
Message-Id:
Date: Wed, 21 Apr 1993 13:28:40 +1000 (EST)
From: Zdzislaw Meglicki
To: qed@mcs.anl.gov
Subject: Machine Math
Sender: qed-owner
Precedence: bulk
In <9304202024.AA20321@si.ucsc.edu> beeson@cats.UCSC.EDU writes:
> I suggest the following organization of the QED effort:
> 1. Machine Math language design and specification.
> 2. Formalization of a sizeable body of mathematics in Machine Math.
> 3. Specification of low-order logic(s) as target language(s).
> 4. Construction of Machine Math compiler(s) into the target language(s).
> 5. Construction of proof-checkers for the target languages.
> 6. Construction of database software to manipulate a large body of
> mathematics in Machine Math form.
Why can't proof-checking itself also be carried out in Machine Math? If
the language is sufficiently precise to allow formulation of theorems
and proofs by human users, it should also be sufficiently precise for
the machine to carry out reasoning in it. "Mathematica", which has been
quoted in this context, has its own high-level language in which
reasoning can be expressed and carried out. In other words, why not
replace points 3, 4, and 5 with a direct translation from Machine Math
to machine code?
Zdzislaw Gustav Meglicki, gustav@arp.anu.edu.au,
Automated Reasoning Program - CISR, and Plasma Theory Group - RSPhysS,
The Australian National University, G.P.O. Box 4, Canberra, A.C.T., 2601,
Australia, fax: (Australia)-6-249-0747, tel: (Australia)-6-249-0158
From qed-owner Wed Apr 21 02:09:25 1993
Received: by antares.mcs.anl.gov id AA07965
(5.65c/IDA-1.4.4 for qed-outgoing); Wed, 21 Apr 1993 02:04:41 -0500
Received: from Maui.CS.UCLA.EDU by antares.mcs.anl.gov with SMTP id AA07958
(5.65c/IDA-1.4.4 for ); Wed, 21 Apr 1993 02:04:38 -0500
Received: from LocalHost.cs.ucla.edu by maui.cs.ucla.edu
(Sendmail 5.61d+YP/3.21) id AA00136;
Wed, 21 Apr 93 00:04:34 -0700
Message-Id: <9304210704.AA00136@maui.cs.ucla.edu>
To: qed@mcs.anl.gov
Subject: Why should a mathematician be interested in QED?
Date: Wed, 21 Apr 93 00:04:32 PDT
From: chou@CS.UCLA.EDU
Sender: qed-owner
Precedence: bulk
I think the success or failure of the QED project depends crucially
on the answer to the following question:
Why should a mathematician be interested in QED?
I hope the mathematicians, rather than the computer scientists
who are interested in math (like myself), on this mailing list
can share their views with us.
- Ching Tsun
From qed-owner Wed Apr 21 12:13:08 1993
Received: by antares.mcs.anl.gov id AA18153
(5.65c/IDA-1.4.4 for qed-outgoing); Wed, 21 Apr 1993 11:07:37 -0500
Received: from cats.UCSC.EDU by antares.mcs.anl.gov with SMTP id AA18146
(5.65c/IDA-1.4.4 for ); Wed, 21 Apr 1993 11:07:34 -0500
Received: from si.UCSC.EDU by cats.UCSC.EDU with SMTP
id AA10649; Wed, 21 Apr 93 09:07:32 -0700
From: beeson@cats.UCSC.EDU
Received: by si.ucsc.edu (5.65/4.7) id AA08993; Wed, 21 Apr 93 09:07:26 -0700
Date: Wed, 21 Apr 93 09:07:26 -0700
Message-Id: <9304211607.AA08993@si.ucsc.edu>
To: chou@CS.UCLA.EDU, qed@mcs.anl.gov
Subject: Machine math, clarification
Sender: qed-owner
Precedence: bulk
Clarification inspired by Bob Boyer's remark: Machine Math MUST
be a formal language, with a precise syntax and semantics. However,
it must be UNLIKE currently known formal languages in that it must
NOT be repulsive to mathematicians. This must be accomplished by
making the syntax of Machine Math sufficiently similar to what
mathematicians currently write in journals.
I have a good deal of experience (seven years of programming)
building a system that incorporates precise rules for calculus, algebra,
and trigonometry, with the aim of making the computer user-friendly
to students of those subjects, and in particular, of automatically
producing step-by-step solutions as an A+ student should. I learned
that ordinary mathematics DOES have precise rules, even though those
rules require thousands of lines to express rather than dozens.
I believe the same goes for ordinary informal proofs. Machine
Math must solve the following proportion:
Machine Math         Lisp or C
______________  =  ________________________
"core logic"        machine or assembly code
Some kind of "core logic" (a minimal logical apparatus) is inevitable,
for implementations, just as machine code can't be avoided if you want
to run algorithms.
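(Editorial sketch, not Beeson's design.) To make the proportion concrete, a minimal "core logic" might look like an LCF-style kernel for implicational logic, in which theorems can only be constructed by applying the rules; a Machine Math compiler would emit sequences of such rule applications, just as a compiler emits machine instructions:

```python
# Hypothetical "core logic" kernel (illustration only): theorems are
# built solely by the three rule functions, so any proof reduces to a
# machine-checkable sequence of rule applications.

class Thm:
    """A theorem: hyps |- concl. Intended to be built only via the rules."""
    def __init__(self, hyps, concl):
        self.hyps = frozenset(hyps)
        self.concl = concl

def assume(p):
    # Assumption rule:  p |- p
    return Thm([p], p)

def intro(p, th):
    # Implication introduction: discharge p, giving  hyps - {p} |- p -> concl
    return Thm(th.hyps - {p}, ("imp", p, th.concl))

def elim(th_imp, th_ant):
    # Modus ponens: from  G1 |- p -> q  and  G2 |- p  infer  G1 u G2 |- q
    tag, p, q = th_imp.concl
    assert tag == "imp" and th_ant.concl == p, "rule misapplied"
    return Thm(th_imp.hyps | th_ant.hyps, q)

# A tiny proof:  |- p -> p
identity = intro("p", assume("p"))
print(identity.hyps, identity.concl)  # frozenset() ('imp', 'p', 'p')
```

Like machine code, such a kernel is too low-level to write in directly; in the analogy, Machine Math plays the role of the high-level language compiled down to these rule applications.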
From qed-owner Wed Apr 21 12:19:51 1993
Received: by antares.mcs.anl.gov id AA18401
(5.65c/IDA-1.4.4 for qed-outgoing); Wed, 21 Apr 1993 11:17:59 -0500
Received: from cats.UCSC.EDU by antares.mcs.anl.gov with SMTP id AA18393
(5.65c/IDA-1.4.4 for ); Wed, 21 Apr 1993 11:17:55 -0500
Received: from si.UCSC.EDU by cats.UCSC.EDU with SMTP
id AA11117; Wed, 21 Apr 93 09:17:53 -0700
From: beeson@cats.UCSC.EDU
Received: by si.ucsc.edu (5.65/4.7) id AA09447; Wed, 21 Apr 93 09:17:52 -0700
Date: Wed, 21 Apr 93 09:17:52 -0700
Message-Id: <9304211617.AA09447@si.ucsc.edu>
To: chou@CS.UCLA.EDU, qed@mcs.anl.gov
Subject: Re: Why should a mathematician be interested in QED?
Sender: qed-owner
Precedence: bulk
Can somebody fill us in on a related historical point? Namely,
WERE mathematicians interested in the Bourbaki project? and if so,
why?
From qed-owner Wed Apr 21 15:09:38 1993
Received: by antares.mcs.anl.gov id AA24204
(5.65c/IDA-1.4.4 for qed-outgoing); Wed, 21 Apr 1993 15:04:51 -0500
Received: from math.harvard.edu (tara.harvard.edu) by antares.mcs.anl.gov with SMTP id AA24197
(5.65c/IDA-1.4.4 for ); Wed, 21 Apr 1993 15:04:48 -0500
Date: Wed, 21 Apr 93 16:04:58 EDT
From: mumford@math.harvard.edu (david mumford)
Message-Id: <9304212004.AA05327@math.harvard.edu>
To: qed@mcs.anl.gov, beeson@cats.UCSC.EDU
Subject: Re: Why should a mathematician be interested in QED?
Sender: qed-owner
Precedence: bulk
Maybe I can pipe up once in this blizzard of comments: the great
attraction of Bourbaki for mathematicians was the promise that the
project would clarify the basic structures and theories used in
algebra, geometry and analysis. The hope was that a fairly small
simple set of basic structures were the basis of most research.
I think it is also true that for a while, this was generally believed.
But for the last 20 years, it has been increasingly doubted: first
because Bourbaki began to drown in its own need to be general enough,
and they never could be sure when to stop (e.g. before doing the
reals, they needed a general theory of topological fields). Secondly,
because math began to be driven by complex theories rather than simple
ones: e.g. the Langlands conjecture at the abstract end, control theory
and probability at the applied end. These theories don't benefit much
from the Bourbaki program.
D.Mumford
From qed-owner Wed Apr 21 16:24:00 1993
Received: by antares.mcs.anl.gov id AA25274
(5.65c/IDA-1.4.4 for qed-outgoing); Wed, 21 Apr 1993 15:41:11 -0500
Received: from linus.mitre.org by antares.mcs.anl.gov with SMTP id AA25267
(5.65c/IDA-1.4.4 for ); Wed, 21 Apr 1993 15:41:09 -0500
Received: from malabar.mitre.org by linus.mitre.org (5.61/RCF-4S)
id AA20361; Wed, 21 Apr 93 16:41:05 -0400
Posted-Date: Wed, 21 Apr 93 16:41:02 -0400
Received: by malabar.mitre.org (5.61/RCF-4C)
id AA05383; Wed, 21 Apr 93 16:41:02 -0400
Date: Wed, 21 Apr 93 16:41:02 -0400
From: jt@linus.mitre.org
Message-Id: <9304212041.AA05383@malabar.mitre.org>
To: qed@mcs.anl.gov
Subject: bourbaki
Cc: jt@linus.mitre.org
Sender: qed-owner
Precedence: bulk
My own feeling is that the influence of the Bourbaki project -- in the
sense that is being discussed here, that is the elaboration of a
compendium of mathematics from a small number of first principles --
has been relatively insignificant. The "seminaire Bourbaki" on the
other hand is another matter.
I don't think most of the Bourbaki volumes are as widely read as one
would expect. Here are some examples:
It was commonly remarked by analysts that Bourbaki's treatment of
integration on arbitrary locally compact spaces (though quite nice in
many respects) was done "twice." (Basically, treatment number one is
too general, requiring treatment number two.) For this reason, analysts
usually preferred Rudin's treatment or even the treatment given in Monroe's
ancient text. This was a particularly hard fact for francophiles like myself
to accept.
Much of the stuff that graduate students have to learn isn't in
Bourbaki. (The whole Hormander theory of linear partial differential
equations is not mentioned, nor are group representations; in fact,
you name it, it probably isn't there.)
Some of the abstractions widely used by Bourbaki are not that
useful. For example, uniform spaces, mentioned previously by John Harrison,
are one such abstraction in my opinion. (Bourbaki's "Notes historiques"
attributes this concept to A. Weil.) Uniform spaces are a general framework
for talking about uniform continuity, uniform convergence, completeness, etc.
However, most structures which are uniform spaces are usually something else
as well, something easier to deal with (e.g., a metric space, a topological
group, etc.). Typically analysts prefer to treat structures in these other,
more direct ways rather than treat them as instances of another abstraction.
Some abstractions are helpful, others aren't.
There are some nice Bourbaki chapters. The treatment of Lie algebras
is elegant; Chapter 9 of "Topologie generale", "Les applications des
nombres reels a la topologie" is another one which is very elegant.
I for one do not have a clear idea of the mathematics QED should
attempt to cover. "All of it" I think is utterly unrealistic. A
codification of the Bourbaki volumes is (a) much too restrictive and
(b) almost guaranteed not to be used by anybody. The mathematics QED
should attempt to cover is, I think, the fundamental issue we should
be discussing now. From a marketing point of view, I think this makes
a lot of sense. Using an automobile manufacturer's analogy, let's design
and produce a car for the market, not find a market for a car we designed
and produced without consideration of the market.
The choice of logic is an important issue, but should be decided by
the kind of mathematics which is going to be the main focus of QED.
Javier Thayer
From qed-owner Wed Apr 21 20:53:24 1993
Received: by antares.mcs.anl.gov id AA02627
(5.65c/IDA-1.4.4 for qed-outgoing); Wed, 21 Apr 1993 20:48:23 -0500
Received: from swan.cl.cam.ac.uk by antares.mcs.anl.gov with SMTP id AA02620
(5.65c/IDA-1.4.4 for ); Wed, 21 Apr 1993 20:48:18 -0500
Received: from auk.cl.cam.ac.uk (user jrh (rfc931)) by swan.cl.cam.ac.uk
with SMTP (PP-6.5) to cl; Thu, 22 Apr 1993 02:48:10 +0100
To: Lawrence C Paulson
Cc: qed@mcs.anl.gov
Subject: Re: little theories and encryption
In-Reply-To: Your message of Mon, 19 Apr 93 17:57:03 +0100. <"swan.cl.ca.644:19.04.93.16.57.14"@cl.cam.ac.uk>
Date: Thu, 22 Apr 93 02:48:05 +0100
From: John Harrison
Message-Id: <"swan.cl.cam.:133330:930422014815"@cl.cam.ac.uk>
Sender: qed-owner
Precedence: bulk
Larry Paulson writes:
> [...] Perhaps 'little theories' could prevent this; each theorem would
> be proved in the weakest framework possible for that particular
> theorem.
I agree this is a very desirable goal. But finding such a framework is
of course a highly nontrivial creative process (unless you include
P |- P that is).
I would imagine that almost all algebraic structures arose from an
attempt to generalize existing proofs and identify precisely which
assumptions they rest on.
Which general structures are worth defining is something people will
disagree over (witness Javier's comments on uniform spaces).
Furthermore, given two proofs of a theorem from widely differing
assumptions, do we have any a priori reasons to expect there to be an
appropriate elegant and nontrivial abstraction (by this I mean to exclude
the conclusion itself or the disjunction of the two assumptions) that
unifies them?
Indeed, might not a proof from one set of assumptions be used as
a lemma in the proof from another set? (Perhaps in an apparently
"essential" way.) For example, Gauss' proof that if a ring R is a
unique factorization domain then so is its polynomial ring R[x] uses as
a lemma the fact that the ring of polynomials over a field is a UFD.
John Harrison (jrh@cl.cam.ac.uk)
From qed-owner Thu Apr 22 09:17:06 1993
Received: by antares.mcs.anl.gov id AA18466
(5.65c/IDA-1.4.4 for qed-outgoing); Thu, 22 Apr 1993 08:55:47 -0500
Received: from life.ai.mit.edu by antares.mcs.anl.gov with SMTP id AA18458
(5.65c/IDA-1.4.4 for ); Thu, 22 Apr 1993 08:55:44 -0500
Received: from spock (spock.ai.mit.edu) by life.ai.mit.edu (4.1/AI-4.10) id AA09122; Thu, 22 Apr 93 09:55:41 EDT
From: dam@ai.mit.edu (David McAllester)
Received: by spock (4.1/AI-4.10) id AA01527; Thu, 22 Apr 93 09:55:39 EDT
Date: Thu, 22 Apr 93 09:55:39 EDT
Message-Id: <9304221355.AA01527@spock>
To: mumford@math.harvard.edu
Cc: qed@mcs.anl.gov, beeson@cats.ucsc.edu
In-Reply-To: david mumford's message of Wed, 21 Apr 93 16:04:58 EDT <9304212004.AA05327@math.harvard.edu>
Subject: Why should a mathematician be interested in QED?
Sender: qed-owner
Precedence: bulk
I can't help responding to David Mumford's comments about Bourbaki.
> The hope was that a fairly small
> simple set of basic structures were the basis of most research.
This seems to express the hope that all mathematical concepts can be
made to be elaborations of a small set of very general basic concepts
(such as topology, algebra, and "structure").
This seems to me to be an issue orthogonal to the question of foundations.
A foundational system, such as ZFC, seems neutral on the question of
whether one should elaborate a small set of general definitions or
make a much larger set of independent and much more concrete definitions,
with abstraction coming later. A foundational system such as Ontic
can accommodate either style. The reals can be defined directly as
Dedekind cuts, or defined indirectly by adding axioms to the concept
of a topological field.
David
From qed-owner Thu Apr 22 09:41:12 1993
Received: by antares.mcs.anl.gov id AA19220
(5.65c/IDA-1.4.4 for qed-outgoing); Thu, 22 Apr 1993 09:25:27 -0500
Received: from math.harvard.edu (tara.harvard.edu) by antares.mcs.anl.gov with SMTP id AA19212
(5.65c/IDA-1.4.4 for ); Thu, 22 Apr 1993 09:25:24 -0500
Date: Thu, 22 Apr 93 10:24:27 EDT
From: mumford@math.harvard.edu (david mumford)
Message-Id: <9304221424.AA05826@math.harvard.edu>
To: dam@ai.mit.edu
Subject: Re: Why should a mathematician be interested in QED?
Cc: qed@mcs.anl.gov, beeson@cats.ucsc.edu
Sender: qed-owner
Precedence: bulk
Dear David (Mc),
The idea behind Bourbaki, viz.
"The hope was that a fairly small
simple set of basic structures were the basis of most research."
seems to me to have a lot to do with whether qed will be successful. If
something like this is not true, or if true, not heeded, one generates
an archipelago of little formal theories that only the true devotees will
understand: and no one will be able to add to the corpus without
knowing which fragments apply where. Of course ZFC or Ontic accommodates
both styles. The question is -- does human cognition accommodate both
equally well and does the style of professional interfacing between scientists
accommodate both styles? I would argue that qed must have a master plan
in the Bourbaki style.
David (Mu)
From Majordomo-Owner Thu Apr 22 09:42:27 1993
Received: by antares.mcs.anl.gov id AA19752
(5.65c/IDA-1.4.4 for qed-approval); Thu, 22 Apr 1993 09:42:25 -0500
Date: Thu, 22 Apr 1993 09:42:25 -0500
Message-Id: <199304221442.AA19752@antares.mcs.anl.gov>
To: qed-approval
From: Majordomo
Subject: SUBSCRIBE qed
Reply-To: Majordomo
felty@research.att.com (Amy Felty) has been added to qed.
No action is required on your part.
From cobb Thu Apr 22 09:44:50 1993
Received: from tango.mcs.anl.gov by antares.mcs.anl.gov with SMTP id AA19573
(5.65c/IDA-1.4.4 for ); Thu, 22 Apr 1993 09:36:59 -0500
Received: by tango.mcs.anl.gov (NX5.67c/GCF-5.2)
id AA12784; Thu, 22 Apr 93 09:36:58 -0500
Date: Thu, 22 Apr 93 09:36:58 -0500
From: cobb
Message-Id: <9304221436.AA12784@tango.mcs.anl.gov>
Received: by NeXT.Mailer (1.87.1)
Received: by NeXT Mailer (1.87.1)
To: mcs
Subject: Seminar Reminder
Argonne National Laboratory
Mathematics and Computer Science Division
SEMINAR
New Methods for Ab Initio Calculations on Very Large Systems
Jan Almlof
University of Minnesota
Minneapolis, MN
10:30AM, Thursday, April 22, 1993
Building 221, Conference Room A216
Abstract
The talk will present several unconventional approaches that can be
used to avoid bottlenecks which are usually encountered in ab initio
quantum chemistry.
I will give a brief overview of the latest development in direct
techniques. These methods were introduced in Hartree-Fock
calculations some time ago, and are now being refined and extended to
encompass correlated electronic structure methods. By avoiding the
storage of integrals, it is obvious that disk size limitations are
completely circumvented. Less obvious but equally important is the
fact that in the direct scheme one can avoid the calculation and
processing of four-center, two-electron integrals altogether.
Techniques to exploit this feature will also be discussed in the
talk.
As the systems under study become larger, it is possible to introduce
very important simplifications due to their `semi-classical'
behaviour in the macroscopic limit, using techniques and concepts
familiar from density functional theory. I will discuss such
techniques, and try to predict how they will affect computational
chemistry in the future. Finally, I will also describe how the
development and application of these methods are being influenced by
recent progress in computer hardware, especially by the emerging
parallel computing technology.
*********************************************************************
Coffee and Donuts will be served in the interaction room prior to the
presentation.
From qed-owner Thu Apr 22 10:10:20 1993
Received: by antares.mcs.anl.gov id AA19885
(5.65c/IDA-1.4.4 for qed-outgoing); Thu, 22 Apr 1993 09:47:42 -0500
Received: from life.ai.mit.edu by antares.mcs.anl.gov with SMTP id AA19877
(5.65c/IDA-1.4.4 for ); Thu, 22 Apr 1993 09:47:39 -0500
Received: from spock (spock.ai.mit.edu) by life.ai.mit.edu (4.1/AI-4.10) id AA10856; Thu, 22 Apr 93 10:47:35 EDT
From: dam@ai.mit.edu (David McAllester)
Received: by spock (4.1/AI-4.10) id AA01531; Thu, 22 Apr 93 10:47:26 EDT
Date: Thu, 22 Apr 93 10:47:26 EDT
Message-Id: <9304221447.AA01531@spock>
To: mumford@math.harvard.edu
Cc: qed@mcs.anl.gov, beeson@cats.ucsc.edu
In-Reply-To: david mumford's message of Wed, 21 Apr 93 16:04:58 EDT <9304212004.AA05327@math.harvard.edu>
Subject: Why should a mathematician be interested in QED?
Sender: qed-owner
Precedence: bulk
What QED should strive for, I think, is a low cost way of reducing the
proofs of working mathematicians to complete formal rigor. One
benefit is increased confidence. Ultimately, the QED project could
also be used as a library in ways that current libraries can not. For
example, a mathematician could enter a definition and ask if this
concept has been studied before. If the system can "understand" the
definition, then it might be able to find appropriate references, or
even answer questions based on its preexisting knowledge of the
concept.
It seems to me that benefits for working mathematicians are not
achievable with current technology. With current systems a formally
verified proof is probably an order of magnitude more work than a
proof published without machine verification. I am quite confident
that this situation will change over time. More and more of the
formal details in proofs are being filled in automatically by existing
verification systems. I also think that one of the best ways to make
progress is to start the QED project. I suspect that in the beginning
the primary participants will be computer scientists interested in
improving the usefulness of the system. The first mathematicians to
use it for their publishable mathematics will probably be some of
those same computer scientists.
David
From qed-owner Thu Apr 22 10:18:04 1993
Received: by antares.mcs.anl.gov id AA20091
(5.65c/IDA-1.4.4 for qed-outgoing); Thu, 22 Apr 1993 09:55:40 -0500
Received: from swan.cl.cam.ac.uk by antares.mcs.anl.gov with SMTP id AA20071
(5.65c/IDA-1.4.4 for ); Thu, 22 Apr 1993 09:55:16 -0500
Received: from auk.cl.cam.ac.uk (user jrh (rfc931)) by swan.cl.cam.ac.uk
with SMTP (PP-6.5) to cl; Thu, 22 Apr 1993 15:52:52 +0100
To: QED
Subject: Bourbaki's construction of the reals
In-Reply-To: Your message of Thu, 22 Apr 93 09:55:39 -0400. <9304221355.AA01527@spock>
Date: Thu, 22 Apr 93 15:52:37 +0100
From: John Harrison
Message-Id: <"swan.cl.cam.:047900:930422145418"@cl.cam.ac.uk>
Sender: qed-owner
Precedence: bulk
A peripheral point, in case the following remark by David McAllester
gives a false impression:
> The reals can be defined directly as Dedekind cuts, or defined
> indirectly by adding axioms to the concept of a topological field.
I should just emphasize that Bourbaki do not actually assert additional
axioms to those for a topological field. Rather they *construct* the
reals as the completion of the uniform space arising from the
topological group of rationals. (As David Mumford notes, a classic case
of "mathematics made difficult".)
Actually if I remember rightly (my copy of "General Topology" is at
home) that only accounts for the additive structure and they include
multiplication in some other way. There's a very beautiful treatment by
Behrend [%] which defines the (additive structure of the) positive reals
using a positional representation and recovers multiplication from the
automorphisms (i.e. identifies multiplication by x with the automorphism
which maps 1 |-> x). This allows a set of independent "axioms", as I
recall.
John Harrison (jrh@cl.cam.ac.uk).
[%] F.A. Behrend, A Contribution to the Theory of Magnitudes and the
Foundations of Analysis, Mathematische Zeitschrift, vol. 63, pp.
345-362, 1956.
From qed-owner Thu Apr 22 10:31:24 1993
Received: by antares.mcs.anl.gov id AA20809
(5.65c/IDA-1.4.4 for qed-outgoing); Thu, 22 Apr 1993 10:20:20 -0500
Received: from dcs.ed.ac.uk (stroma.dcs.ed.ac.uk) by antares.mcs.anl.gov with SMTP id AA20727
(5.65c/IDA-1.4.4 for ); Thu, 22 Apr 1993 10:17:22 -0500
Received: from whalsay.dcs.ed.ac.uk by dcs.ed.ac.uk id aa21649;
22 Apr 93 16:13 BST
Date: Thu, 22 Apr 93 16:13:21 BST
Message-Id: <11595.9304221513@whalsay.dcs.ed.ac.uk>
From: Randy Pollack
To: qed@mcs.anl.gov
Cc: rap@dcs.ed.ac.uk
Subject: Root Logics
Sender: qed-owner
Precedence: bulk
I want to say something about the Root Logic, but will motivate this
by replying to Michael Beeson's recent comment:
> I suggest the following organization of the QED effort:
> 1. Machine Math language design and specification.
> 2. Formalization of a sizeable body of mathematics in Machine Math.
> 3. Specification of low-order logic(s) as target language(s).
> 4. Construction of Machine Math compiler(s) into the target language(s).
> 5. Construction of proof-checkers for the target languages.
> 6. Construction of database software to manipulate a large body of
> mathematics in Machine Math form.
I take a different view: The Root Logic must be agreed upon first,
along with some notion of "modularization" for the Root Logic, such as
little theories or whatever. In fact, these are the only things that
have to be agreed upon.
Once the Root Logic is decided on, everyone can go and write proof
tools for whatever language and logic they like, using whatever proof
search technology they like, as long as the proofs can be translated
into the Root Logic and its theory mechanism. Everyone can use the
high-level language and logic of his/her choice, within the
restriction that its proofs can be translated to proofs in the Root
Logic with its theories. It is even possible that some existing proof
tools might be modified to translate their internal justifications
into explicit proofs in the Root Logic. Of course the correctness of
derivations in the Root Logic must be efficiently checkable without
any fancy proof search; all proof search is done by whatever front-end
is used. Thus I am assuming that the Root Logic has a reasonably
efficient notation for completely explicit proofs. (See my comment on
tactics below.)
This suggestion is not an argument against an agreed upon high-level
logic, but that is a much harder problem than to agree on a Root
Logic, and it is not necessary to solve this harder problem. Also,
different kinds of mathematics may be more amenable to different
specialized proof tools.
This brings up a point related to Javier's recent comment (these
messages are coming faster than I can write!):
The mathematics QED should attempt to cover is, I think, the
fundamental issue we should be discussing now. From a marketing point
of view, this, I think, makes a lot of sense. Using an automobile manufacturer's
analogy, let's design and produce a car for the market, not find a
market for a car we designed and produced without consideration of the
market.
Yes, we should be discussing what mathematics QED will cover, and we
may well design specialized proof tools for these various "markets" as
I've suggested above. However, we should not design the Root Logic --
the thing in which all the different developments of parts of mathematics
are put together -- with some particular market in mind. The Root Logic is
the one thing that, once agreed upon, is very difficult to change. It
should be of sufficient expressiveness to handle unexpected problems.
On the Root Logic
Bob Boyer writes
The key point, I think that classicist David Hilbert would say, is
that WE CAN ALL AGREE to present and discuss essentially all important
logics and theories in a `finitistic' format, such as PRA admits.
David McAllester writes
Although I am sympathetic to the idea that we can all accept primitive
recursive arithmetic (PRA), I am a little troubled by using it as the
foundation of QED. Consider the mean value theorem (MVT) of calculus.
I agree that this theorem can be encoded as "P is a proof in ZFC of
ZFC-MVT" where ZFC-MVT is some formal statement of the mean value
theorem in the language of set theory. I am concerned about the
difficulty of using this entry in the QED library. It seems
"encrypted".
Larry Paulson writes
Mathematics done in simple base logics is not encrypted at all. Isabelle
supports the illusion that you are working directly in the formalized logic,
be it ZF, HOL, Constructive Type Theory or what have you. AUTOMATH type
theories also support a natural proof style.
This topic of representing logics in other logics is not (yet) a
cleaned-up and codified field of mathematics, so different words are
used for the same thing, and the same words for different things.
Let me distinguish three approaches for representing logics, although
they may be combined in different ways, and I'm probably leaving some
out altogether.
PRA
In PRA, we use the natural numbers, one canonical inductively defined
class, to encode PR definable classes of well-founded trees, e.g. the
syntax and derivation trees of some object logic. This is really
encryption, as McAllester said, and clearly depends on the power of
computation over the canonical class.
FS0
In Feferman's FS0, we can directly construct classes of finitely
branching well-founded trees, and use these classes directly to
represent, isomorphically, trees of syntax and derivations of some
object logic. This is a lot less like encryption, and a lot more natural
than in PRA. These two ways of representing a logic are similar in
that they employ the power of the meta-logic to represent and compute
over inductively defined classes; the terms and derivations of the
object logic are objects of the meta-logic. One might, for example,
quantify over derivations (well, not in PRA, but in FS0), and do
induction over derivations to prove admissible rules of the object
logic. This is the point Feferman is making in [1] when he says
(sect. 21) "Examples of elementary meta-logical theorems about Prov(S)
which can be proved in FS0 ... are the Deduction Theorem and the
Finiteness Theorem."
Logical Frameworks
The third approach for representing logics is different, and can be
used in meta-logics with no inductively defined classes, and no power
to construct them. It is used in Automath, and more recently by
Martin-L\"of and also the Edinburgh Logical Framework (LF) [2]. The
idea is to use the inductive structure of the meta-theory itself to
represent the inductive structure of the object theory. I'm being
vague because I don't know how to explain this other than by example:
see [2],[3], or [4]. LF is a first-order logic; its consistency is
provable in PA, and it represents exactly the same functions as simply
typed lambda calculus (much less than PR), yet it can faithfully
represent a great many powerful object-logics. This method is also
used in some higher-order meta-logics. For example some versions of
Automath had some second-order power. Isabelle, as mentioned by Larry
Paulson in this mailing list, uses this method of representing logics.
When Larry says "Isabelle supports the illusion that you are working
directly in the formalized logic" he means using this form of
representation. Isabelle is higher-order (but without dependent
types), but for representing logics in this way it really needs little
more than first order. Lambda-prolog also uses this third approach.
(The question of how strong the meta-logic has to be to use this
third approach has been somewhat quantified by Amy Felty in a series
of papers.)
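A minimal sketch of this third style (my own toy rendering, with invented names, using functions of the meta-language in place of a real framework's dependent types):

```python
# Toy rendering of the third style: the object logic's hypothetical
# judgment "given a proof of a, a proof of b" is represented by a
# meta-level (Python) function, not by an inductively defined datatype
# of derivations. All names here are invented for illustration.

class Proof:
    """A proof term of the represented object logic."""
    def __init__(self, prop, rule, parts):
        self.prop, self.rule, self.parts = prop, rule, parts

def imp(a, b):
    return ("imp", a, b)

def imp_intro(a, b, body):
    """->introduction: body maps a proof of a to a proof of b; it plays
    the role of the horizontal line in the object-logic rule."""
    hyp = Proof(a, "hyp", [])
    return Proof(imp(a, b), "imp_intro", [body(hyp)])

def imp_elim(pf_imp, pf_a):
    """->elimination (modus ponens), with a well-formedness check."""
    tag, a, b = pf_imp.prop
    assert tag == "imp" and pf_a.prop == a
    return Proof(b, "imp_elim", [pf_imp, pf_a])

A = ("atom", "A")
pf = imp_intro(A, A, lambda x: x)      # |- A -> A, via meta-level identity
assert pf.prop == imp(A, A)
```

The point is that the hypothetical judgment is the meta-level function passed to imp_intro; there is no inductive datatype of derivations to do induction over.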
Some Comparisons
This third approach sounds pretty good. Larry says he can "support
the illusion that you are working directly in the formalized logic",
and I've just made a point that it can be carried out in very weak
meta-logics. What's the problem? I see two major issues: how many
logics can be represented in this way, and how easy are the
representations to use. (Both of these points are hinted at by
Feferman in [1].)
How Many Object-Logics are Representable
The representations in this third style use the connectives of the
meta-logic to represent meta-connectives of the object logic. For
example, the implication of the meta-theory is used for "consequence"
(the horizontal line in rules) of the object logic (see [2]). Is it
possible to "faithfully and adequately" represent every object logic
in this way? (Clearly "every object logic" is too vague to be
meaningful.) This is addressed in [4] for LF, and is a technical
question; even the definition of "faithful and adequate" is not clear.
It is even more problematic for stronger meta-logics than LF; for
example if functions are definable by Primitive Recursion then the
function type A->B, which is used to code syntactic abstraction (i.e.
a B-thing with one A-thing hole) contains more than just syntactic
abstractions. The correctness of representations in the first two
approaches mentioned above is more straightforward: one just builds
copies of the object-logic's informal term, formula and derivation
trees. Similarly, more object-logics are representable with the first
two approaches: we just build the derivation trees. Notice, however,
that for FS0 it is still possible to find naturally occurring formal
systems that are not representable: FS0 only represents finitely
branching trees, not the so-called "generalised inductive
definitions", including infinitely branching well-founded trees. (See
[5] for an example of a use of generalised inductive definitions in
formalizing everyday mathematics. By the way, does anyone know of a
philosophical school that accepts inductive definitions but rejects
generalized inductive definitions where the infinite branching is
given by a finite rule?)
How Easy is it to Use the Representations
In the third approach, the terms, formulas, and derivations of the
object logic are not objects of the meta-logic. For one thing, you
cannot do induction over them, so the meta-theory of the object logic
is not provable. (How much meta-theory you can do may depend on how
much extra power the meta-logic has. In Isabelle's representation of
first-order logic, is the Deduction Theorem provable? It is certainly
not provable in the LF presentation of FOL.) In such a representation
the Deduction Theorem is only representable as a tactic; i.e. a program
in some meta-language of the framework that actually transforms
represented derivations of A |- B (these are terms of the framework)
into derivations of |- A->B (also terms of the framework).
Correctness of this tactic cannot even be stated in the framework.
(Of course, we don't have to state or prove correctness of a tactic to
use it, but actually running tactics blows up the size of proof
trees!) In FS0, the Deduction theorem for a representation of FOL is
just a theorem of FS0. This is how mathematics is actually done!
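A toy version of such a tactic, for a Hilbert system with axiom schemas K and S plus modus ponens (an illustrative sketch only; the encoding of derivations as nested tuples is arbitrary):

```python
# The Deduction Theorem as a tactic, for a tiny Hilbert system with
# axiom schemas K and S and modus ponens. Derivations are nested tuples:
# ("hyp", a), ("K", a, b), ("S", a, b, c), or ("mp", d1, d2).

def imp(a, b):
    return ("->", a, b)

def concl(d):
    """The formula a derivation d derives."""
    tag = d[0]
    if tag == "hyp":
        return d[1]
    if tag == "K":                     # a -> (b -> a)
        return imp(d[1], imp(d[2], d[1]))
    if tag == "S":                     # (a->(b->c)) -> ((a->b) -> (a->c))
        a, b, c = d[1], d[2], d[3]
        return imp(imp(a, imp(b, c)), imp(imp(a, b), imp(a, c)))
    return concl(d[1])[2]              # mp: from x -> c and x, conclude c

def deduce(a, d):
    """Turn a derivation d, possibly using hypothesis a, into a
    derivation of a -> concl(d) that no longer uses that hypothesis."""
    c = concl(d)
    if d == ("hyp", a):                # a -> a, by the usual S/K/K trick
        return ("mp", ("mp", ("S", a, imp(a, a), a),
                             ("K", a, imp(a, a))),
                      ("K", a, a))
    if d[0] != "mp":                   # an axiom (or some other hypothesis)
        return ("mp", ("K", c, a), d)  # weaken via c -> (a -> c)
    x = concl(d[2])                    # d = mp(d1 : x -> c, d2 : x)
    return ("mp", ("mp", ("S", a, x, c), deduce(a, d[1])),
                  deduce(a, d[2]))

A = ("atom", "A")
assert concl(deduce(A, ("hyp", A))) == imp(A, A)
```

Note that deduce rebuilds the derivation, inserting S and K instances at every modus ponens step -- running the tactic really does blow up the size of the proof tree.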
Which Root Logic?
It may sound like I am putting forward FS0, but I am only using FS0 as
an example of a certain style. I am arguing against the
LF/Isabelle/Automath style of representation. I am arguing for a Root
Logic that can express inductively defined classes. How strong is the
Root Logic? How complex are the inductively defined classes it can
express? I have argued that FS0 is slightly deficient in only having
finitely branching trees. (Feferman suggests this can be easily
fixed.) I only happen to know these are very useful from experience
[5]. The QED project needs to collect this kind of experience in
order to make reasonable decisions on the Root Logic.
Another weak but expressive logic, very different in character from
FS0 is the Martin-L\"of logical framework, presented somewhat
informally in [6]. It has generalized inductive definitions, and is
acceptable to most constructivists.
[1] "Finitary inductively presented logics", S. Feferman, Logic
Colloquium '88, Ferro, Bonotto, Valenti and Zanardo (eds.), Elsevier,
North-Holland, 1989., pp. 191-220.
[2] "A Framework for Defining Logics", Harper, Honsell and Plotkin,
JACM, Jan. 1993.
[3] "Using Typed Lambda Calculus to Implement Formal Systems on a
Machine", Avron, Honsell, Mason and Pollack, JAR 9:309-354, 1992.
[4] "Representing Logics in Type Theory" Philippa Gardner, PhD thesis,
Computer Science Dept., University of Edinburgh, July 1992.
[5] "Pure Type Systems Formalized", McKinna and Pollack, TLCA'93
(Utrecht), LNCS 664, March 1993. Also available by anonymous ftp from
ftp.ed.ac.uk; cd to export/lego and get README.
[6] "Programming in Martin-L\"of's Type Theory", Nordstr\"om,
Petersson and Smith, Oxford University Press, 1990.
From qed-owner Thu Apr 22 11:39:56 1993
Received: by antares.mcs.anl.gov id AA22633
(5.65c/IDA-1.4.4 for qed-outgoing); Thu, 22 Apr 1993 11:25:25 -0500
Received: from cats.UCSC.EDU by antares.mcs.anl.gov with SMTP id AA22621
(5.65c/IDA-1.4.4 for ); Thu, 22 Apr 1993 11:25:04 -0500
Received: from si.UCSC.EDU by cats.UCSC.EDU with SMTP
id AA05070; Thu, 22 Apr 93 09:24:30 -0700
From: beeson@cats.UCSC.EDU
Received: by si.ucsc.edu (5.65/4.7) id AA12041; Thu, 22 Apr 93 09:24:20 -0700
Date: Thu, 22 Apr 93 09:24:20 -0700
Message-Id: <9304221624.AA12041@si.ucsc.edu>
To: qed@mcs.anl.gov, rap@dcs.ed.ac.uk
Subject: Re: Root Logics
Sender: qed-owner
Precedence: bulk
In order to decide on the organization of QED effort, e.g. on the
question of whether it is more important to agree on a high-level
language or a root logic, it will be necessary first to agree on the
aim(s) of the project, in order to judge the efficacy of various means
in attaining the desired aim(s).
From my point of view, the aim is to provide
(1) the means for practicing mathematicians to write machine-checkable
proofs
(2) the means to check those proofs once written
(3) the incentive to write such proofs.
I think it is (3) that is most problematic. People were willing to
learn TeX, because the reward is immediate. The incentive to write
machine-checkable proofs consists, potentially, in two features:
avoidance of errors, and adherence to a community standard of proof.
The second of these is a long way off. The first will not be enough
incentive for practicing mathematicians. Therefore at first most
of the authors of such proofs will be graduate students, who have been
asked by their professors or have taken the initiative themselves to
write machine-checkable proofs of certain theorems. The proofs they
will write will be in a high-level language close to mathematical
practice (unless they are CS grad students whose advisor is directing
a proof-checking research project).
Do other people have widely differing views of the aim of QED?
Are there mistakes in the above analysis of incentives?
From qed-owner Thu Apr 22 11:40:44 1993
Received: by antares.mcs.anl.gov id AA22804
(5.65c/IDA-1.4.4 for qed-outgoing); Thu, 22 Apr 1993 11:33:00 -0500
Received: from tuminfo2.informatik.tu-muenchen.de by antares.mcs.anl.gov with SMTP id AA22794
(5.65c/IDA-1.4.4 for ); Thu, 22 Apr 1993 11:32:49 -0500
Received: from sunbroy14.informatik.tu-muenchen.de ([131.159.0.114])
by tuminfo2.informatik.tu-muenchen.de
with SMTP id <57659>; Thu, 22 Apr 1993 18:32:37 +0200
Received: by sunbroy14.informatik.tu-muenchen.de id <8079>; Thu, 22 Apr 1993 18:32:18 +0200
From: Tobias.Nipkow@informatik.tu-muenchen.de
To: qed@mcs.anl.gov
In-Reply-To: Randy Pollack's message of Thu, 22 Apr 1993 17:13:21 +0200 <11595.9304221513@whalsay.dcs.ed.ac.uk>
Subject: Root Logics
Message-Id: <93Apr22.183218met_dst.8079@sunbroy14.informatik.tu-muenchen.de>
Date: Thu, 22 Apr 1993 18:32:07 +0200
Sender: qed-owner
Precedence: bulk
Let me comment on Randy's distinction between Logical Frameworks and PRA/FS0
as a Root Logic because I think it is a red herring. Logical Frameworks are
intentionally weak logics (no, you cannot prove the Deduction Theorem in
Isabelle's representation of first-order logic). You are not meant to reason
in them. They merely provide the infrastructure (eg binding mechanisms) for
implementing other logics. If you want to do meta-theory, you first define
some appropriate Root Logic RL, eg FS0, and then define your actual logic of
interest within RL.
So the question is: should a project like QED implement the Root Logic
directly or should it use Logical Frameworks technology? Given the scale of
the envisaged effort, QED would probably need to be built from scratch,
probably using ideas from Logical Frameworks. More modest systems like HOL
can be and have been implemented in Isabelle with only a small loss in
functionality and efficiency.
Of course none of this answers the real question as to what the Root Logic
should be.
Tobias
From qed-owner Thu Apr 22 12:14:44 1993
Received: by antares.mcs.anl.gov id AA23647
(5.65c/IDA-1.4.4 for qed-outgoing); Thu, 22 Apr 1993 12:08:21 -0500
Received: from sun2.nsfnet-relay.ac.uk by antares.mcs.anl.gov with SMTP id AA23640
(5.65c/IDA-1.4.4 for ); Thu, 22 Apr 1993 12:08:19 -0500
Via: uk.ac.edinburgh.aifh; Thu, 22 Apr 1993 15:54:00 +0100
Received: from etive.aisb.ed.ac.uk by aisb.ed.ac.uk; Thu, 22 Apr 93 09:54:11 BST
Date: Thu, 22 Apr 93 09:54:10 BST
Message-Id: <27363.9304220854@etive.aisb.ed.ac.uk>
From: Alan Bundy
Subject: Re: Machine math, clarification
To: beeson@cats.ucsc.edu
Phone: 44-31-650-2716
Fax: 44-31-650-6516
Cc: qed@mcs.anl.gov
Sender: qed-owner
Precedence: bulk
> Clarification inspired by Bob Boyer's remark: Machine Math MUST
> be a formal language, with a precise syntax and semantics. However,
> it must be UNLIKE currently known formal languages in that it must
> NOT be repulsive to mathematicians. This must be accomplished by
> making the syntax of Machine Math sufficiently similar to what
> mathematicians currently write in journals.
I agree with the above remarks, but note that if taken
literally the QED system would have to cope with remarks like
"similarly", "proof trivial", "by symmetry", etc. Ultimately, it
might be possible to cope with some proof descriptions containing
such remarks by building tactics that can try to reproduce the
proofs they imply. However, it will certainly be a non-trivial
task.
Alan Bundy
From qed-owner Thu Apr 22 19:50:39 1993
Received: by antares.mcs.anl.gov id AA03557
(5.65c/IDA-1.4.4 for qed-outgoing); Thu, 22 Apr 1993 19:02:50 -0500
Received: from cli.com by antares.mcs.anl.gov with SMTP id AA03548
(5.65c/IDA-1.4.4 for ); Thu, 22 Apr 1993 19:02:46 -0500
Received: by CLI.COM (4.1/1); Thu, 22 Apr 93 18:57:02 CDT
Date: Thu, 22 Apr 93 19:02:28 CDT
From: Robert S. Boyer
Message-Id: <9304230002.AA19396@axiom.CLI.COM>
Received: by axiom.CLI.COM (4.1/CLI-1.2)
id AA19396; Thu, 22 Apr 93 19:02:28 CDT
To: qed@mcs.anl.gov
Subject: Red Herring?
Reply-To: boyer@CLI.COM
Sender: qed-owner
Precedence: bulk
I suspect that Tobias.Nipkow@informatik.tu-muenchen.de may be
PRACTICALLY wrong when he says
Let me comment on Randy's distinction between Logical Frameworks and PRA/FS0
as a Root Logic because I think it is a red herring. Logical Frameworks are
intentionally weak logics (no, you cannot prove the Deduction Theorem in
Isabelle's representation of first-order logic). You are not meant to reason
in them. They merely provide the infrastructure (eg binding mechanisms) for
implementing other logics. If you want to do meta-theory, you first define
some appropriate Root Logic RL, eg FS0, and then define your actual logic of
interest within RL.
My suspicion is grounded in the following belief. I believe that it
will be necessary, practically speaking, i.e., from the point of view
of both cpu time and disk space, to be able to prove the correctness
of algorithms such as simplifiers and decision procedures in the root
logic and then to `reflect' those proved algorithms, i.e., having
proved them correct to `run' them as very efficiently compiled
programs and to trust the results thereby computed precisely as much
as if a full Hilbert-Goedel or whatever style proof object had been
constructed and checked. I say believe. I have no proof of this
belief. But please grant me this belief for the moment.
If the verification of algorithms is not possible in the root logic
for want of inductive or other logical power, then I don't see how
(what a puny claim) reflection could be usefully employed in practice.
It wouldn't do one any good that I can see to prove the correctness of
some algorithm in some random theory one layer removed because the
right to reflect algorithms cannot in general be granted for just any
logic represented in the root logic, as far as I can see. (I admit
that my vision becomes myopic when I start to think about reflection.
It is still terribly worrisome to me that a reflection axiom can
sometimes strengthen a logic by the addition of its consistency.)
What Tobias claims seems perfectly true to me in principle, just not
in practice. Maybe God has enough time and disk space to check his
mathematics that way, but I believe we humans will have to boot-strap
our way up, USING some of the mathematics we have PROVED within the
qed system, to the maximum extent logically imaginable, and that
includes verifying code, running it, and trusting the results.
Bob
P. S. I want to agree with Beeson, Mumford, Thayer and others who
bring our sharp attention to the premier questions of which parts of
mathematics ought to be treated and how that mathematics should be
organized and presented. But granting the priority of those questions
to the overall success of the project, and while we think about these
most important questions, there is still perhaps some profit in also
discussing the framing and plumbing of the system, which maybe no
ordinary classical mathematician will ever want or need to see.
From qed-owner Thu Apr 22 21:01:13 1993
Received: by antares.mcs.anl.gov id AA04531
(5.65c/IDA-1.4.4 for qed-outgoing); Thu, 22 Apr 1993 20:05:44 -0500
Received: from tuminfo2.informatik.tu-muenchen.de by antares.mcs.anl.gov with SMTP id AA04520
(5.65c/IDA-1.4.4 for ); Thu, 22 Apr 1993 20:05:36 -0500
Received: from sunbroy14.informatik.tu-muenchen.de ([131.159.0.114])
by tuminfo2.informatik.tu-muenchen.de
with SMTP id <57718>; Fri, 23 Apr 1993 03:05:26 +0200
Received: by sunbroy14.informatik.tu-muenchen.de id <8079>; Fri, 23 Apr 1993 03:04:59 +0200
From: Konrad Slind
To: qed@mcs.anl.gov
Subject: Cold water
Message-Id: <93Apr23.030459met_dst.8079@sunbroy14.informatik.tu-muenchen.de>
Date: Fri, 23 Apr 1993 03:04:55 +0200
Sender: qed-owner
Precedence: bulk
As someone who has been impressed by the progress in the verification of
hardware and software over the past few years, I think that QED (as
manifesto'ed) is a path best untaken. Specifications of computer systems
often draw very little from conventional mathematical knowledge of the
sort that QED is aimed at codifying. Formalized conventional mathematics
is a source to draw from, not a goal. How reasonable is a massive
project that
a) may or may not serve the needs of mathematicians, who so far seem
immune to the attractions of actual formalized proofs and
b) will distract from, if not impede (by diversion of talent),
progress in verification, which is demonstrably of use to
computer manufacturers and software writers?
To phrase things bluntly, which group - mathematicians or computer
companies - has more money? And what are their needs?
Having divested myself of these opinions, I would like to note that I
would dearly love to have some sort of "community bin" of math and
computer theories. I just think that it should be a byproduct, not a
holy grail.
Konrad.
From qed-owner Thu Apr 22 21:17:57 1993
Received: by antares.mcs.anl.gov id AA04822
(5.65c/IDA-1.4.4 for qed-outgoing); Thu, 22 Apr 1993 20:23:22 -0500
Received: from tuminfo2.informatik.tu-muenchen.de by antares.mcs.anl.gov with SMTP id AA04814
(5.65c/IDA-1.4.4 for ); Thu, 22 Apr 1993 20:23:16 -0500
Received: from sunbroy14.informatik.tu-muenchen.de ([131.159.0.114])
by tuminfo2.informatik.tu-muenchen.de
with SMTP id <57718>; Fri, 23 Apr 1993 03:23:07 +0200
Received: by sunbroy14.informatik.tu-muenchen.de id <8079>; Fri, 23 Apr 1993 03:22:48 +0200
From: Konrad Slind
To: boyer@CLI.COM
Cc: qed@mcs.anl.gov
In-Reply-To: 's message of Fri, 23 Apr 1993 02:02:28 +0200 <9304230002.AA19396@axiom.CLI.COM>
Subject: Red Herring?
Message-Id: <93Apr23.032248met_dst.8079@sunbroy14.informatik.tu-muenchen.de>
Date: Fri, 23 Apr 1993 03:22:39 +0200
Sender: qed-owner
Precedence: bulk
Just a side comment on reflection: one has to distinguish "logical"
reflection from "computational" reflection. The examples that Robert S.
Boyer gives (simplifiers and decision procedures) are of algorithms
whose acceptance (after their correctness has been proven) as new
inference rules in no way increases the power of the logic. This is
computational reflection.
> (I admit that my vision becomes myopic when I start to think about
> reflection. It is still terribly worrisome to me that a reflection axiom
> can sometimes strengthen a logic by the addition of its consistency.)
This is logical reflection. I think it would be hard to justify the
restriction to a simple logic on one hand and the addition of such a
powerful axiom on the other hand.
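The contrast can be stated schematically (standard notation; this
restatement is mine, not Slind's):

```latex
% Computational reflection: once T proves a procedure f sound, f may be
% used as a derived inference rule; nothing new becomes provable.
\[
  T \vdash \mathrm{Sound}(f)
  \quad\Longrightarrow\quad
  \bigl(\, f(\varphi) = \mathsf{true} \;\Rightarrow\; T \vdash \varphi \,\bigr)
\]
% Logical reflection: an axiom schema over T's provability predicate.
% The instance \varphi = \bot yields \neg\mathrm{Pr}_T(\ulcorner\bot\urcorner),
% i.e. Con(T), which T cannot prove itself (Goedel II) -- so T is strengthened.
\[
  \mathrm{Pr}_T(\ulcorner \varphi \urcorner) \;\rightarrow\; \varphi
\]
```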
Of course Boyer already knows this, as demonstrated in his and Moore's
"Metafunctions" paper.
Konrad.
From qed-owner Fri Apr 23 12:01:06 1993
Received: by antares.mcs.anl.gov id AA21561
(5.65c/IDA-1.4.4 for qed-outgoing); Fri, 23 Apr 1993 11:12:15 -0500
Received: from netcom4.netcom.com by antares.mcs.anl.gov with SMTP id AA21548
(5.65c/IDA-1.4.4 for ); Fri, 23 Apr 1993 11:12:12 -0500
Received: by netcom4.netcom.com (5.65/SMI-4.1/Netcom)
id AA06019; Fri, 23 Apr 93 09:12:25 -0700
Date: Fri, 23 Apr 93 09:12:25 -0700
From: val@netcom.com (Dewey Val Schorre)
Message-Id: <9304231612.AA06019@netcom4.netcom.com>
To: qed@mcs.anl.gov
Subject: Theorem generators and Theory Generators
Sender: qed-owner
Precedence: bulk
Once QED gets going, we will need not only proof checkers and theorem
provers, but also "theorem generators" and "theory generators". A
theorem generator will work with an existing theory of QED and
generate interesting theorems that are not yet present. A theory
generator will scan the QED data base and generate "little theories"
that could have been used for proofs in existing theories.
Notice the three levels of programs:
Proof Checkers -- always succeed
Theorem Provers -- may loop forever, but the criterion for success
is objective
Theorem and Theory Generators -- may loop forever, and the criterion
for success can only be subjective
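As a toy illustration of the first level, here is a checker that always
terminates because it makes a single pass over a finite script. The
miniature calculus (assumption citation plus modus ponens) is my own
choice for brevity, not anything proposed in the discussion.

```python
# Toy proof checker: one pass over a finite script, so it always
# terminates -- the defining property of the first level above.
# Formulas: strings are atoms; ("->", p, q) is the implication p -> q.

def check(assumptions, script):
    """Return the list of established formulas, or raise ValueError."""
    proved = []
    for step in script:
        if step[0] == "assume":              # cite a given assumption
            _, f = step
            if f not in assumptions:
                raise ValueError(f"not an assumption: {f!r}")
            proved.append(f)
        elif step[0] == "mp":                # modus ponens on two earlier lines
            _, i, j = step
            ante, imp = proved[i], proved[j]
            if not (isinstance(imp, tuple) and imp[0] == "->" and imp[1] == ante):
                raise ValueError(f"modus ponens does not apply: {ante!r}, {imp!r}")
            proved.append(imp[2])
        else:
            raise ValueError(f"unknown step kind: {step[0]!r}")
    return proved

# p, p -> q, q -> r  |-  r
script = [("assume", "p"),
          ("assume", ("->", "p", "q")),
          ("mp", 0, 1),                      # derives q
          ("assume", ("->", "q", "r")),
          ("mp", 2, 3)]                      # derives r
result = check({"p", ("->", "p", "q"), ("->", "q", "r")}, script)
print(result[-1])                            # -> r
```

A theorem prover, by contrast, would have to search for such a script,
and the search may never terminate.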
This leads to a new form of Turing's test for artificial intelligence.
When a paper is submitted to a journal, was it written by a human
mathematician or by one of QED's theorem or theory generators?
Val
From qed-owner Fri Apr 23 12:01:32 1993
Received: by antares.mcs.anl.gov id AA21535
(5.65c/IDA-1.4.4 for qed-outgoing); Fri, 23 Apr 1993 11:11:36 -0500
Received: from netcom4.netcom.com by antares.mcs.anl.gov with SMTP id AA21527
(5.65c/IDA-1.4.4 for ); Fri, 23 Apr 1993 11:11:33 -0500
Received: by netcom4.netcom.com (5.65/SMI-4.1/Netcom)
id AA05976; Fri, 23 Apr 93 09:11:46 -0700
Date: Fri, 23 Apr 93 09:11:46 -0700
From: val@netcom.com (Dewey Val Schorre)
Message-Id: <9304231611.AA05976@netcom4.netcom.com>
To: qed@mcs.anl.gov
Subject: Re:Machine Math
Sender: qed-owner
Precedence: bulk
Back in the old days, nobody specified languages, they wrote
compilers. It had to be that way because nobody knew whether the
language constructs contained all the information needed to compile
into machine language, unless they actually had a compiler that did
it.
We are in this same situation today. We might design a "Machine Math"
language that was perfectly acceptable to mathematicians, and later
find that it lacked information needed for translation to a lower
level.
The design of the language that mathematicians will use, and its
translation to the lower level language needs to be an iterative
process.
Val
From qed-owner Fri Apr 23 17:49:59 1993
Received: by antares.mcs.anl.gov id AA00247
(5.65c/IDA-1.4.4 for qed-outgoing); Fri, 23 Apr 1993 17:47:50 -0500
Received: from tuminfo2.informatik.tu-muenchen.de by antares.mcs.anl.gov with SMTP id AA00238
(5.65c/IDA-1.4.4 for ); Fri, 23 Apr 1993 17:47:40 -0500
Received: from sunbroy14.informatik.tu-muenchen.de ([131.159.0.114])
by tuminfo2.informatik.tu-muenchen.de
with SMTP id <57824>; Sat, 24 Apr 1993 00:47:28 +0200
Received: by sunbroy14.informatik.tu-muenchen.de id <8079>; Sat, 24 Apr 1993 00:47:20 +0200
From: Konrad Slind
To: boyer@CLI.COM
Cc: qed@mcs.anl.gov
In-Reply-To: 's message of Fri, 23 Apr 1993 05:35:40 +0200 <9304230335.AA19436@axiom.CLI.COM>
Subject: Why Mathematical System Modeling Needs the Mathematics of Analysis
Message-Id: <93Apr24.004720met_dst.8079@sunbroy14.informatik.tu-muenchen.de>
Date: Sat, 24 Apr 1993 00:47:11 +0200
Sender: qed-owner
Precedence: bulk
Robert S. Boyer writes
> Currently software and hardware proving is very introspective,
> focusing upon compilers, operating systems, multipliers and such. We
> rarely more than dream about mechanically checking theorems about
> applications that interact with the continuous world, the very world
> that was the inspiration for analysis, the world in which safety
> critical systems are at risk of doing their damage. I believe that we
> don't yet study such applications because it seems so hard, perhaps
> impossibly hard to do all by oneself from scratch.
I don't agree. One reason why we have not yet moved on to verification
of continuous/discrete systems is that the verification of devices having
arithmetic or even purely logical specifications is still quite
fruitful, in terms of successes and of research interest. Of course,
verifications utilizing more complex models are important challenges
still confronting the field (see below).
> I believe it would immensely help the prospects for beginning to treat
> the mathematics of computer systems that interact with the continuous
> world if there existed a rigorous, substantial body of mechanically
> checked formal definitions and theorems about integrals, partial
> differential equations, and the rest of mathematics used by engineers.
There are already significant theories of the real numbers in, at least,
the HOL, IMPS, and Nuprl systems. Of course, these are but a fraction of
the massive amount of analysis that has been done on paper and in
computer algebra systems, but we certainly don't need to wait for QED to
come to fruition in order to start verifying systems specified in terms
of the reals. As a byproduct of such verifications, these theories can
only become deepened and extended, thus achieving many of the goals of
QED in a relatively quiet and simple manner.
Konrad.
From qed-owner Fri Apr 23 20:11:33 1993
Received: by antares.mcs.anl.gov id AA02424
(5.65c/IDA-1.4.4 for qed-outgoing); Fri, 23 Apr 1993 19:59:34 -0500
Received: from tuminfo2.informatik.tu-muenchen.de by antares.mcs.anl.gov with SMTP id AA02415
(5.65c/IDA-1.4.4 for ); Fri, 23 Apr 1993 19:59:21 -0500
Received: from sunbroy14.informatik.tu-muenchen.de ([131.159.0.114])
by tuminfo2.informatik.tu-muenchen.de
with SMTP id <57668>; Sat, 24 Apr 1993 02:59:11 +0200
Received: by sunbroy14.informatik.tu-muenchen.de id <8079>; Sat, 24 Apr 1993 02:58:49 +0200
From: Konrad Slind
To: qed@mcs.anl.gov
Subject: structure of qed
Message-Id: <93Apr24.025849met_dst.8079@sunbroy14.informatik.tu-muenchen.de>
Date: Sat, 24 Apr 1993 02:58:36 +0200
Sender: qed-owner
Precedence: bulk
The QED manifesto does make a valid point about the atrocious state of
sharing information between theorem proving systems. Consider this as a
network topology problem. The QED proposal amounts to a star network,
with the root logic in the middle. This requires agreeing on a root
logic. What about having a ring network? Then there's no need for a root
logic; all a theorem proving system needs to do to join the network is
to be able to handle the "packets" (formulas and proofs) of one member
of the network and ensure that its own output packets are digestible by
its downstream neighbour.
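The topology can be sketched in a few lines of code. The prover names,
packet format, and identity "translators" below are all invented
placeholders; the real translation work is precisely what is hard.

```python
# Sketch of the ring topology: each system needs only a translator for
# its upstream neighbour's packet format, not for a common root logic.
# All names and formats here are hypothetical, for illustration only.

ring = ["HOL", "Nuprl", "nqthm"]            # hypothetical membership order

# translate[(src, dst)] maps a formula between adjacent packet formats;
# identity stubs stand in for the real (hard!) translation work.
translate = {("HOL", "Nuprl"): lambda f: f,
             ("Nuprl", "nqthm"): lambda f: f,
             ("nqthm", "HOL"): lambda f: f}

def forward(packet, hops):
    """Pass a formula around the ring for `hops` steps."""
    pos = 0
    for _ in range(hops):
        src, dst = ring[pos], ring[(pos + 1) % len(ring)]
        packet = translate[(src, dst)](packet)
        pos = (pos + 1) % len(ring)
    return packet

# A full circuit returns the packet to its origin; with n members the
# ring needs only n translators, versus n translators plus an agreed
# root logic for the star arrangement.
print(forward("!x. x = x", 3))
```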
Konrad.
From qed-owner Sat Apr 24 12:08:00 1993
Received: by antares.mcs.anl.gov id AA15306
(5.65c/IDA-1.4.4 for qed-outgoing); Sat, 24 Apr 1993 10:06:22 -0500
Received: from swan.cl.cam.ac.uk by antares.mcs.anl.gov with SMTP id AA15298
(5.65c/IDA-1.4.4 for ); Sat, 24 Apr 1993 10:06:17 -0500
Received: from auk.cl.cam.ac.uk (user jrh (rfc931)) by swan.cl.cam.ac.uk
with SMTP (PP-6.5) to cl; Sat, 24 Apr 1993 16:06:11 +0100
To: QED
Subject: Incentive to write proofs
In-Reply-To: Your message of Thu, 22 Apr 93 09:24:20 -0700. <9304221624.AA12041@si.ucsc.edu>
Date: Sat, 24 Apr 93 16:06:06 +0100
From: John Harrison
Message-Id: <"swan.cl.cam.:223760:930424150614"@cl.cam.ac.uk>
Sender: qed-owner
Precedence: bulk
Michael Beeson writes:
> From my point of view, the aim is to provide
> (1) the means for practicing mathematicians to write machine-checkable
> proofs
> (2) the means to check those proofs once written
> (3) the incentive to write such proofs.
>
> I think it is (3) that is most problematic. [...]
Perhaps not. Speaking from personal experience, I think there is great
*educational* value in doing proofs in a form acceptable to a computer
theorem prover:
* Since you are forced to express yourself precisely, your own thoughts are
often clarified.
* You appreciate the need for subtle points in proofs, which all but the
most conscientious would skim over in a textbook (even supposing they
appear in the textbook!) Such niggly details are particularly common in
model theory and proof theory.
* Since it is (and is likely to continue to be in the foreseeable future)
much more laborious than doing proofs by hand, you are more inclined to
husband your resources. For example, you might make a critical comparison
of several available proofs before starting to type, and this is
interesting and valuable. Maybe you would even think up interesting new
mathematical ideas to simplify the proofs.
Doubtless the educative value would be much less for a professional
mathematician, but I believe many students would find it both instructive
and enjoyable to do proofs by machine. You could even set competitions to
get them to find the most `elegant' way of doing things.
I think it is definitely the choice of logical basis that will be most
problematic. Indeed, I believe a consensus is almost impossible, and in
the end some cadre will have to bulldoze opposition and go its own way.
John Harrison (jrh@cl.cam.ac.uk).
From qed-owner Sat Apr 24 12:08:38 1993
Received: by antares.mcs.anl.gov id AA15814
(5.65c/IDA-1.4.4 for qed-outgoing); Sat, 24 Apr 1993 10:38:51 -0500
Received: from bos1c.delphi.com (delphi.com) by antares.mcs.anl.gov with SMTP id AA15806
(5.65c/IDA-1.4.4 for ); Sat, 24 Apr 1993 10:38:48 -0500
Received: from bix.com by delphi.com (PMDF #3207 ) id
<01GXDZBM70PS8ZDV7M@delphi.com>; Sat, 24 Apr 1993 11:37:56 EDT
Date: 24 Apr 1993 11:24:52 -0400 (EDT)
From: mthayer@BIX.com
Subject: Other considerations
To: qed@mcs.anl.gov
Message-Id: <9304241124.memo.2814@BIX.com> Sat, 24 Apr 1993 11:24:52 -0400 (EDT)
Content-Transfer-Encoding: 7BIT
X-Cosy-To: qed@mcs.anl.gov
Sender: qed-owner
Precedence: bulk
Allow me to add yet another perspective on the issues that
need to be addressed by the QED family. I
feel that the foundationalist goals so far expressed are
somewhat unrealistic either now or in the future: what
counts as rigor or "firm foundation" has changed in the past,
and will do so again, undoubtedly because "God has more disk
space" than we do. While I think that these millennialist
aspirations may make a good research program, I prefer a
different avenue, but one at least mentioned previously:
what can we get people interested in?
It may be that our best approach is to preach a Great
Crusade in search of ultimate surety and foundation, but a
More Modest Proposal might bring out some interesting issues
as well: Let us merely seek to lay bare what methods and
practices generally accepted within a particular field allow
in a completely formal way: i.e. we pick some area where
there is broad acceptance of methods (both inferential and
postulational) and, in the spirit of a suggestion of Victor
Yodaiken, use the language standard of that area (he suggested
Serge Lang's text). If we follow this approach, we need to
select some area where there might be significant agreement
among mathematicians as to the utility of such a program:
I suggest the verification of the simple group classification.
The methods are for the most part clearly agreed upon; it is
the details which are in doubt.
If you are still with me, then you will see a problem which
faces the Grand Crusade as well as the More Modest
Proposal: how do we input the massive amount of material?
Clearly getting machine readable versions of all the relevant
materials is the least of our worries: translation into the
chosen base logic is an enormous task, but one which is not
without forerunners in other areas. What we need is some
sort of computer-aided or, more unlikely, completely
automated, Hermeneutical Daemon to use in this project.
This Daemon is more closely related to work in the humanities
on textual analysis than to Automated Reasoning Projects,
and might require yet another group of interested
participants. In short, the More Modest Proposal may lead
to a slightly different Crusade. It does, however, point
up another area of consideration for the QED group.
Michael Thayer (donning his asbestos suit)
From qed-owner Sat Apr 24 21:15:39 1993
Received: by antares.mcs.anl.gov id AA25298
(5.65c/IDA-1.4.4 for qed-outgoing); Sat, 24 Apr 1993 21:03:28 -0500
Received: from linus.mitre.org by antares.mcs.anl.gov with SMTP id AA25289
(5.65c/IDA-1.4.4 for ); Sat, 24 Apr 1993 21:03:25 -0500
Received: from circe.mitre.org by linus.mitre.org (5.61/RCF-4S)
id AA19022; Sat, 24 Apr 93 22:03:23 -0400
Posted-Date: Sat, 24 Apr 93 22:02:50 -0400
Received: by circe.mitre.org (5.61/RCF-4C)
id AA09730; Sat, 24 Apr 93 22:02:51 -0400
Message-Id: <9304250202.AA09730@circe.mitre.org>
To: QED
Subject: Re: Incentive to write proofs
In-Reply-To: Your message of "Sat, 24 Apr 93 16:06:06 BST."
<"swan.cl.cam.:223760:930424150614"@cl.cam.ac.uk>
X-Postal-Address: MITRE, Mail Stop A156 \\ 202 Burlington Rd. \\ Bedford, MA 01730
X-Telephone-Number: 617 271 2654; Fax 617 271 3816
Date: Sat, 24 Apr 93 22:02:50 -0400
From: guttman@linus.mitre.org
Sender: qed-owner
Precedence: bulk
I like John Harrison's point about the educational value of proofs a
lot.
> Speaking from personal experience, I think there is great
> *educational* value in doing proofs in a form acceptable to a computer
> theorem prover...
I think that (when we manage to do it right) one of the biggest
pay-offs of computerized proofs will be to help students to get a
deeper understanding of mathematics. I don't mean a deeper
understanding of logic and of foundations, but of ordinary
mathematics.
Experience working with a good computer-based proof system will teach
which theorems are needed as lemmas for which others; it will teach
what role the lemmas really play in the proofs; and it will teach
which kinds of manipulations (computations) are valid in which kinds
of structures.
Not many students currently get a grasp at this level. But I think it
will be easier to learn with a computerized mathematical laboratory,
because then you can experiment and see what kinds of reasoning are
successful and where you find insurmountable gaps.
That's what I'd like to see come out of a qed-like enterprise. The
foundational content of qed seems to me less important. I even think
that as the quality of the computerized mathematical laboratory
gradually increases, it will also provide a practical advantage
for the professional mathematician.
I think I disagree with John though when he says:
> I think it is definitely the choice of logical basis that will be most
> problematic.
Maybe he means problematic in the sense of empirically difficult to get
people to agree, and if that's it then maybe he's right. But if he
means that it's hard to choose the right one and disastrous to choose
the wrong one, then I disagree.
I think that we know many logics that are expressive, and easy to
reason with, and well-understood. I think we'll be much better off if
we settle in with one (or a few, arranged in Konrad's ring network
design, but please, not too many), and start figuring out how to carry
out interesting pieces of mathematics.
Josh
From jt@linus.mitre.org Sat Apr 24 23:49:26 1993
Received: from linus.mitre.org by antares.mcs.anl.gov with SMTP id AA27770
(5.65c/IDA-1.4.4 for ); Sat, 24 Apr 1993 23:49:24 -0500
Return-Path:
Received: from malabar.mitre.org by linus.mitre.org (5.61/RCF-4S)
id AA20531; Sun, 25 Apr 93 00:49:21 -0400
Posted-Date: Sun, 25 Apr 93 00:49:19 -0400
Received: by malabar.mitre.org (5.61/RCF-4C)
id AA06743; Sun, 25 Apr 93 00:49:19 -0400
Date: Sun, 25 Apr 93 00:49:19 -0400
From: jt@linus.mitre.org
Message-Id: <9304250449.AA06743@malabar.mitre.org>
To: qed-owner@mcs.anl.gov
Subject: Boyer's hypothesis
Cc: jt@linus.mitre.org
One assumption behind the QED project is what I will call the BOYER
HYPOTHESIS. My reason for giving it this name is the following: Bob
Boyer has been polling various people involved in automated theorem
proving for some time now, and put this idea forward based on the
evidence provided to him by his informants.
BOYER's HYPOTHESIS
For any theorem, the length l_c of its computer proof is not significantly
larger than the length l_i of its rigorous (but possibly informal)
proof.
Obviously there is a lot of vagueness in the above formulation of the
Boyer Hypothesis, but current evidence seems to suggest that the
statement does not depend too strongly on how the various quantities
are measured. Moreover, current evidence also suggests the Boyer
constant l_c/l_i is < 100.
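Restated symbolically (my notation, not from the message):

```latex
\[
  \mathrm{BC} \;=\; \frac{l_c}{l_i},
  \qquad
  \text{Boyer's Hypothesis: } \mathrm{BC} < 100
  \text{ for the theorems mechanized to date,}
\]
where $l_c$ is the length of the computer proof and $l_i$ the length
of a rigorous but possibly informal proof of the same theorem.
```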
I would like to propose that we ask ourselves how likely is it that
the Boyer constant is of this order of magnitude for current research
mathematics (current, means mathematics done within the last 30
years). I think this is important, because I feel that what has been
done with automated tools is still an insignificant part of
mathematics and includes virtually nothing from algebraic topology,
partial differential equations, algebraic number theory, etc..
If it turns out that the Boyer constant for proofs in these areas is
10000, then much of what we are talking about in this forum about
"doing real mathematics" is highly speculative and may border on
self-deception.
Unfortunately, I do not have a clue how one would answer this question
in a convincing way other than by experimentation -- which seems to go
against the aprioristic program suggested earlier in this discussion
by several contributors.
Javier Thayer
From qed-owner Sun Apr 25 14:20:21 1993
Received: by antares.mcs.anl.gov id AA07042
(5.65c/IDA-1.4.4 for qed-outgoing); Sun, 25 Apr 1993 14:07:08 -0500
Received: from Maui.CS.UCLA.EDU by antares.mcs.anl.gov with SMTP id AA07035
(5.65c/IDA-1.4.4 for ); Sun, 25 Apr 1993 14:07:05 -0500
Received: from LocalHost.cs.ucla.edu by maui.cs.ucla.edu
(Sendmail 5.61d+YP/3.21) id AA08125;
Sun, 25 Apr 93 12:07:02 -0700
Message-Id: <9304251907.AA08125@maui.cs.ucla.edu>
To: qed@mcs.anl.gov
Subject: Re: Incentive to write proofs
Date: Sun, 25 Apr 93 12:06:58 PDT
From: chou@CS.UCLA.EDU
Sender: qed-owner
Precedence: bulk
I guess everyone who's done machine-assisted theorem proving would
agree with John Harrison and Josh Guttman on its educational value.
But, is there any funding agency which is willing to fund such a
grand proposal as QED just because of its "educational value"?
- Ching Tsun
From qed-owner Sun Apr 25 17:28:46 1993
Received: by antares.mcs.anl.gov id AA08501
(5.65c/IDA-1.4.4 for qed-outgoing); Sun, 25 Apr 1993 17:16:25 -0500
Received: from netcom2.netcom.com by antares.mcs.anl.gov with SMTP id AA08494
(5.65c/IDA-1.4.4 for ); Sun, 25 Apr 1993 17:16:22 -0500
Received: by netcom2.netcom.com (5.65/SMI-4.1/Netcom)
id AA28228; Sun, 25 Apr 93 15:16:35 -0700
Date: Sun, 25 Apr 93 15:16:35 -0700
From: val@netcom.com (Dewey Val Schorre)
Message-Id: <9304252216.AA28228@netcom2.netcom.com>
To: qed@mcs.anl.gov
Subject: Document -- What's Done and What's to be Done
Sender: qed-owner
Precedence: bulk
Suppose we jointly produce a document, sitting in the ftp area beside
the manifesto, stating what part of mathematics has already been done
in existing theorem provers, and what ought to be done. There would be
a section for each of the current provers, and a section for what
ought to be done in QED.
David McAlister suggests that if something has been done in one prover
it should be easy to translate over to another. This is because all
the vaguenesses would have been removed. After we see what has been
done in each of the provers, we will see how hard it is to translate
this to the other provers.
Could the proof of Goedel's theorem that Shankar did in nqthm be
easily translated to HOL? Someone could write nqthm as a tactic in
HOL. Now that doesn't sound very easy. A translator from the nqthm
source language to HOL would be the easier step.
Even if this were done, would the classicists be happy with the
results? Wouldn't they think there should be quantifiers in the
statement of the theorem to make it readable?
An automatic translation from one prover to another would probably
work in some cases, and in others a complete rewrite would be
necessary, referring to the machine proof only where the journal
proof was vague.
The production of this document does not preclude our simultaneously
working on the base logic or the machine math source language. In
fact, the production of this document is not even contrary to Konrad
Slind's cold-water view, if you consider the "ought to be done"
section as applying to any theorem proving system and not just to the
proposed new QED.
Val
From qed-owner Sun Apr 25 20:58:41 1993
Received: by antares.mcs.anl.gov id AA10319
(5.65c/IDA-1.4.4 for qed-outgoing); Sun, 25 Apr 1993 20:47:49 -0500
Received: from arp.anu.edu.au by antares.mcs.anl.gov with SMTP id AA10310
(5.65c/IDA-1.4.4 for ); Sun, 25 Apr 1993 20:47:41 -0500
Received: by arp.anu.edu.au id AA09211
(5.65c/IDA-1.4.2.9 for qed@mcs.anl.gov); Mon, 26 Apr 1993 11:47:32 +1000
Received: from Messages.7.15.N.CUILIB.3.45.SNAP.NOT.LINKED.arp.anu.edu.au.sun4.41
via MS.5.6.arp.anu.edu.au.sun4_41;
Mon, 26 Apr 1993 11:47:31 +1000 (EST)
Message-Id: <0fqnwnqKmlE2MUS1dr@arp>
Date: Mon, 26 Apr 1993 11:47:31 +1000 (EST)
From: Zdzislaw Meglicki
To: qed@mcs.anl.gov
Subject: Re: Boyer's Hypothesis
Sender: qed-owner
Precedence: bulk
In <9304251444.AA06798@malabar.mitre.org> jt@linus.mitre.org writes:
> If it turns out that the Boyer constant for proofs in these areas [i.e.,
> "real life" mathematics done in the last 30 years] is 10000, then much
> of what we are talking about in this forum about "doing real
> mathematics" is highly speculative and may border on self-deception.
But, this is exactly where systems such as "Analytica" by E. Clarke and
X. Zhao (CMU-CS-92-117) come in. The reason why "real life" mathematical
proofs are so short is that they are not carried out all the way down
to the logical foundations. Instead, they utilize already accumulated
knowledge and prove things by reducing them to other already proven
theorems. The pyramid of structures that has been built in this way over
the last couple of hundred years is very high indeed, but this is also
the most efficient way of doing things: "seeing further by standing on
the shoulders of the giants". The QED system will have to incorporate
somehow this way of doing things if it is at all to be workable and
useful.
In the "Analytica" report you'll find examples of various theorems from
mathematical analysis and their mechanically constructed proofs - the
culmination of the paper is a mechanical proof of Weierstrass' example
of a continuous nowhere differentiable function. The proofs delivered by
the system are short, quite comparable to similar proofs that would be
produced by humans, and perfectly readable. So the Boyer constant is
*not* 10,000 in this case: it is about 1. It was possible to achieve
that because the prover was built on the basis of a symbolic computation
system for manipulating mathematical formulas, already equipped with a very
formidable data base of mathematical knowledge.
What QED could add to a system like that would be a sound logical
foundation. I.e., every one of the facts stored in its equally
formidable data base would itself be a theorem with an accompanying
proof. Although the proofs should not go all the way down to the
underlying logic - rather, they should try to reduce problems to already
proven theorems, the way it is done in man-made mathematics - by
combining proofs of theorems on lower levels it should ultimately be
possible to reconstruct mechanically the whole proof tree with its
leaves at the level of the basic logic (although I am not sure why
anybody would ever want to do that - apart from just demonstrating that
it can be done).
Zdzislaw Gustav Meglicki, gustav@arp.anu.edu.au,
Automated Reasoning Program - CISR, and Plasma Theory Group - RSPhysS,
The Australian National University, G.P.O. Box 4, Canberra, A.C.T., 2601,
Australia, fax: (Australia)-6-249-0747, tel: (Australia)-6-249-0158
From qed-owner Sun Apr 25 21:32:20 1993
Received: by antares.mcs.anl.gov id AA10627
(5.65c/IDA-1.4.4 for qed-outgoing); Sun, 25 Apr 1993 21:21:07 -0500
Received: from arp.anu.edu.au by antares.mcs.anl.gov with SMTP id AA10620
(5.65c/IDA-1.4.4 for ); Sun, 25 Apr 1993 21:21:01 -0500
Received: by arp.anu.edu.au id AA09495
(5.65c/IDA-1.4.2.9 for qed@mcs.anl.gov); Mon, 26 Apr 1993 12:20:56 +1000
Received: from Messages.7.15.N.CUILIB.3.45.SNAP.NOT.LINKED.arp.anu.edu.au.sun4.41
via MS.5.6.arp.anu.edu.au.sun4_41;
Mon, 26 Apr 1993 12:20:56 +1000 (EST)
Message-Id:
Date: Mon, 26 Apr 1993 12:20:56 +1000 (EST)
From: Zdzislaw Meglicki
To: qed@mcs.anl.gov
Subject: Re: Incentive to write proofs
Sender: qed-owner
Precedence: bulk
In <9304251907.AA08125@maui.cs.ucla.edu> chou@CS.UCLA.EDU writes:
> I guess everyone who's done machine-assisted theorem proving would agree
> with John Harrison and Josh Guttman on its educational value. But, is
> there any funding agency which is willing to fund such a grand proposal
> as QED just because of its "educational value"?
Few areas of science are as important to the intellectual development
of society as a whole, and to its industrial and technological prowess,
as mathematics. Societies which neglect or just dump mathematics very
quickly begin to lag behind other, more mathematically aware, societies
and cultures. After all, it was the developments in mathematics, physics,
chemistry and engineering that gave the Europeans the technological edge
which provided them with the means to colonize other cultures and
countries in the 18th and 19th centuries. This point was made very
succinctly by Abdus Salam (Nobel prize for electroweak interactions
together with Weinberg and Glashow) in his conversation with Zhou En
Lai. But even without delving into past ages the importance of
mathematics and the high cost of its neglect can be easily ascertained
by observing what has happened in some Western societies over the last
30 years. The cultural revolution of the 60s and 70s resulted in
diminished emphasis on mathematics and other so called "hard sciences"
in primary and secondary schools in many countries. Almost all of them
paid for this with faltering economies and diminished technological
innovation in comparison with countries in which the "Hippie cultural
revolution" did not have such a devastating effect (e.g., Japan, France,
Germany, Italy, Singapore). For such reasons it may be far easier to
obtain funding for QED than Chou's note would suggest. There are many
people in high technology industries and in management of those
industries who may see such a project as a worthwhile long term
investment in the future of their own industries and their
competitiveness. The governments may be equally interested in fostering
mathematical culture. I don't see this as a hopeless dream.
Zdzislaw Gustav Meglicki, gustav@arp.anu.edu.au,
Automated Reasoning Program - CISR, and Plasma Theory Group - RSPhysS,
The Australian National University, G.P.O. Box 4, Canberra, A.C.T., 2601,
Australia, fax: (Australia)-6-249-0747, tel: (Australia)-6-249-0158
From qed-owner Sun Apr 25 21:57:11 1993
Received: by antares.mcs.anl.gov id AA10829
(5.65c/IDA-1.4.4 for qed-outgoing); Sun, 25 Apr 1993 21:45:19 -0500
Received: from bos3a.delphi.com by antares.mcs.anl.gov with SMTP id AA10822
(5.65c/IDA-1.4.4 for ); Sun, 25 Apr 1993 21:45:16 -0500
Received: from bix.com by delphi.com (PMDF #3207 ) id
<01GXG0WNALI88ZE2EU@delphi.com>; Sun, 25 Apr 1993 22:44:43 EDT
Date: 25 Apr 1993 22:44:11 -0400 (EDT)
From: mthayer@BIX.com
Subject: Boyer's Constant
To: qed@mcs.anl.gov
Message-Id: <9304252244.memo.3955@BIX.com> Sun, 25 Apr 1993 22:44:12 -0400 (EDT)
Content-Transfer-Encoding: 7BIT
X-Cosy-To: qed@mcs.anl.gov
Sender: qed-owner
Precedence: bulk
Javier Thayer expresses his concern about the size of Boyer's
Constant, and says:
>Unfortunately, I do not have a clue how one would answer
>this question in a convincing way other than by
>experimentation -- which seems to go against the
>aprioristic program suggested earlier in this discussion
>by several contributors.
There is some evidence from the psychological literature
that
BC = 7^n
where the exponent n is the number of layers of abstraction
between the base language and the target mathematical
language. This is a good a priori estimate,which, coupled
with Javier's first concern that:
>then much of what we are talking about in this forum about
>"doing real mathematics" is highly speculative and may border
>on self-deception.
leads me to believe that experimentation is highly necessary,
or that the foundational leanings of many of the group MAY be
highly misplaced.
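For concreteness, the 7^n estimate can be tabulated in two lines
(illustrative arithmetic only; the appeal to the psychological
literature is Thayer's, not mine):

```python
# BC = 7**n, where n is the number of layers of abstraction between
# the base language and the target mathematical language.
estimates = {n: 7 ** n for n in range(1, 6)}
print(estimates)   # {1: 7, 2: 49, 3: 343, 4: 2401, 5: 16807}
```

On this estimate, three layers already exceed the informally observed
constant of under 100, and five layers exceed the 10,000 figure that
Javier Thayer treats as the danger threshold.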
The alternative of doing various areas of mathematics in a
language and symbolism very close to that in current use
(as suggested by e.g. Victor Yodaiken) would leave
us with a (possibly) large number of little theories, each
expressed in their own terms. Supposing that the
substitutional apparatus of the base system is VERY strong,
its other logical resources could be more limited.
The advantage of this sort of multilingual approach would be that it
1) would allow experimental determination of the size of Boyer's
Constant,
2) would resemble the way in which mathematics is normally done,
while
3) would allow for the (possible) discovery of principles common
to all the little theories.
Michael Thayer
============================================================
From qed-owner Sun Apr 25 22:13:28 1993
Received: by antares.mcs.anl.gov id AA10971
(5.65c/IDA-1.4.4 for qed-outgoing); Sun, 25 Apr 1993 22:02:13 -0500
Received: from swan.cl.cam.ac.uk by antares.mcs.anl.gov with SMTP id AA10963
(5.65c/IDA-1.4.4 for ); Sun, 25 Apr 1993 22:02:03 -0500
Received: from auk.cl.cam.ac.uk (user jrh (rfc931)) by swan.cl.cam.ac.uk
with SMTP (PP-6.5) to cl; Mon, 26 Apr 1993 04:01:31 +0100
To: Zdzislaw Meglicki
Cc: qed@mcs.anl.gov
Subject: Re: Boyer's Hypothesis
In-Reply-To: Your message of Mon, 26 Apr 93 11:47:31 +1000. <0fqnwnqKmlE2MUS1dr@arp>
Date: Mon, 26 Apr 93 04:01:25 +0100
From: John Harrison
Message-Id: <"swan.cl.cam.:074970:930426030144"@cl.cam.ac.uk>
Sender: qed-owner
Precedence: bulk
Zdzislaw Meglicki writes (re. Analytica):
> What QED could add to a system like that would be a sound logical
> foundation. I.e., every one of the facts stored in its equally
> formidable data base would itself be a theorem with an accompanying
> proof. Although the proofs should not go all the way down to the
> underlying logic - rather, they should try to reduce problems to already
> proven theorems, the way it is done in man-made mathematics - by
> combining proofs of theorems on lower levels it should ultimately be
> possible to reconstruct mechanically the whole proof tree with its
> leaves at the level of the basic logic (although I am not sure why
> anybody would ever want to do that - apart from just demonstrating that
> it can be done).
I don't question that it will be necessary to organize the system
hierarchically. However it may be inevitable that theorems become
more complicated when stated rigorously. If you think sufficiently
precisely about some mathematical concepts there can be layers of
detail that are very difficult to hide. (The status of partial functions
applied to terms outside their domains is a particularly interesting
example, I think.)
John Harrison (jrh@cl.cam.ac.uk).
From qed-owner Sun Apr 25 22:20:26 1993
Received: by antares.mcs.anl.gov id AA11027
(5.65c/IDA-1.4.4 for qed-outgoing); Sun, 25 Apr 1993 22:09:17 -0500
Received: from linus.mitre.org by antares.mcs.anl.gov with SMTP id AA11019
(5.65c/IDA-1.4.4 for ); Sun, 25 Apr 1993 22:09:15 -0500
Received: from malabar.mitre.org by linus.mitre.org (5.61/RCF-4S)
id AA09469; Sun, 25 Apr 93 23:09:13 -0400
Posted-Date: Sun, 25 Apr 93 23:09:09 -0400
Received: by malabar.mitre.org (5.61/RCF-4C)
id AA07147; Sun, 25 Apr 93 23:09:10 -0400
Message-Id: <9304260309.AA07147@malabar.mitre.org>
To: qed@mcs.anl.gov
Subject: Re: Analytica and Boyer's Hypothesis
Date: Sun, 25 Apr 93 23:09:09 -0400
From: jt@linus.mitre.org
Sender: qed-owner
Precedence: bulk
--------
Zdzislaw Meglicki gives Analytica as an example of a theorem prover
which cooks up proofs having Boyer constant 1. For example, in his
contribution he states:
...the culmination of the paper is a mechanical proof of Weierstrass'
example of a continuous nowhere differentiable function.
But in my view his remarks fail to address my concern about the Boyer
constant for more recent mathematics. For example could one do Hans
Lewy's famous example of a PDE without a local solution, an example
dating from the mid-1950's, via a reasonable Boyer number? (Forget
about anything fancy, like figuring out when local solutions exist!)
Moreover, some of the strength of Analytica relies on its use of
algorithms it gets from Mathematica (eg. Gosper's algorithm, algebraic
simplification, etc.). This is not meant in any way to belittle the
importance of the work of Clarke and Zhao. Quite the contrary: the
fact that Analytica uses algorithms of this kind is, in my view, an
intelligent design decision which has provided us with a great deal of
useful empirical evidence. However, one should be very cautious about
building a general purpose theorem prover on top of a system such as
Mathematica, which has an enormous number of documented "misfeatures."
I repeat my concern voiced earlier that we need to carefully
understand more proofs with a view to determining (or estimating) their
Boyer number.
Javier Thayer
From qed-owner Mon Apr 26 07:51:06 1993
Received: by antares.mcs.anl.gov id AA16029
(5.65c/IDA-1.4.4 for qed-outgoing); Mon, 26 Apr 1993 07:40:23 -0500
Received: from sun2.nsfnet-relay.ac.uk by antares.mcs.anl.gov with SMTP id AA16019
(5.65c/IDA-1.4.4 for ); Mon, 26 Apr 1993 07:40:20 -0500
Via: uk.ac.edinburgh.aifh; Mon, 26 Apr 1993 13:39:42 +0100
Received: from etive.aisb.ed.ac.uk by aisb.ed.ac.uk; Mon, 26 Apr 93 13:39:35 BST
Date: Mon, 26 Apr 93 13:39:34 BST
Message-Id: <28659.9304261239@etive.aisb.ed.ac.uk>
From: Alan Bundy
Subject: Re: Boyers Hypothesis
To: jt@linus.mitre.org
In-Reply-To: jt@linus.mitre.org's message of Sun, 25 Apr 93 10:44:30 -0400
Phone: 44-31-650-2716
Fax: 44-31-650-6516
Fcc: +qed.mai
Cc: qed@aisb.ed.ac.uk
Sender: qed-owner
Precedence: bulk
Javier
> BOYER's HYPOTHESIS
>
> For any theorem the length l_c of its computer proof is not significantly
> larger than the length l_i of its rigorous (but possibly informal)
> proof.
There seems to be an implicit assumption behind your and
others' uses of the Boyer hypothesis that a computer proof has a
unique Boyer number. But this is not true of tactic-based
provers. The whole proof may be generated by a single tactic.
This big tactic may break down into smaller ones, and so on, until
eventually it is unpacked into applications of rules of
inference of the base logic. It may be possible to read the proof
at many different levels, eg a 1 step proof at the top level, a
20,000 step proof at the bottom level. If the tactics are
well-designed then the most congenial level to read it may be at
some intermediate tactic level and this may correspond well with
the rigorous proof, or with some higher level informal proof.
It may also be possible to input the proof at this higher
level as well as read it there.
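The layered reading Alan describes can be sketched concretely. The following toy Python model is an invented illustration, not any real prover's API: a tactic either is a primitive inference rule of the base logic or unpacks into subtactics, and the same proof tree yields different step counts depending on how far it is unpacked.

```python
# Toy sketch of a tactic proof readable at several levels of detail.
# The class, the tactic names, and the proof tree are all invented.

class Tactic:
    def __init__(self, name, subtactics=None):
        self.name = name
        # An empty subtactic list marks a primitive inference rule.
        self.subtactics = subtactics or []

    def steps_at_depth(self, depth):
        """Count the proof steps visible when the tree is unpacked
        `depth` levels down (primitives stop unpacking early)."""
        if depth == 0 or not self.subtactics:
            return 1
        return sum(t.steps_at_depth(depth - 1) for t in self.subtactics)

rule = lambda n: Tactic(n)          # primitive rule of the base logic

# One top-level tactic, two intermediate tactics, four primitive rules.
proof = Tactic("induct_and_simplify", [
    Tactic("base_case", [rule("refl")]),
    Tactic("step_case", [rule("rewrite"), rule("mp"), rule("refl")]),
])

print(proof.steps_at_depth(0))  # 1  -- a one-step proof at the top level
print(proof.steps_at_depth(1))  # 2  -- the intermediate tactic level
print(proof.steps_at_depth(2))  # 4  -- fully unpacked base-logic steps
```

In a real system the bottom-level count might be thousands of steps rather than four, and the "congenial" level to read would be one of the intermediate ones.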
Alan
From qed-owner Mon Apr 26 07:51:33 1993
Received: by antares.mcs.anl.gov id AA16022
(5.65c/IDA-1.4.4 for qed-outgoing); Mon, 26 Apr 1993 07:40:21 -0500
Received: from sun2.nsfnet-relay.ac.uk by antares.mcs.anl.gov with SMTP id AA16014
(5.65c/IDA-1.4.4 for ); Mon, 26 Apr 1993 07:40:10 -0500
Via: uk.ac.edinburgh.aifh; Mon, 26 Apr 1993 13:28:32 +0100
Received: from etive.aisb.ed.ac.uk by aisb.ed.ac.uk; Mon, 26 Apr 93 13:28:24 BST
Date: Mon, 26 Apr 93 13:28:23 BST
Message-Id: <28642.9304261228@etive.aisb.ed.ac.uk>
From: Alan Bundy
Subject: Re: Document -- What's Done and What's to be Done
To: Dewey Val Schorre
In-Reply-To: Dewey Val Schorre's message of Sun, 25 Apr 93 15:16:35 -0700
Phone: 44-31-650-2716
Fax: 44-31-650-6516
Fcc: +qed.mai
Cc: qed@aisb.ed.ac.uk
Sender: qed-owner
Precedence: bulk
Val
> Someone could write nqthm as a tactic in HOL.
That is essentially what we have been trying to do in the
Clam/Oyster system (see Carolyn Talcott's list). Clam contains
tactics based on Nqthm. These drive Oyster. Oyster is a copy of
Nuprl, but it could have been HOL, and at one time we had a
project with a student of Mike Gordon's to have Clam drive HOL.
Adaptation to other tactic-based provers would not be too
difficult.
Alan
From qed-owner Mon Apr 26 09:02:41 1993
Received: by antares.mcs.anl.gov id AA16984
(5.65c/IDA-1.4.4 for qed-outgoing); Mon, 26 Apr 1993 08:47:00 -0500
Received: from life.ai.mit.edu by antares.mcs.anl.gov with SMTP id AA16976
(5.65c/IDA-1.4.4 for ); Mon, 26 Apr 1993 08:46:57 -0500
Received: from spock (spock.ai.mit.edu) by life.ai.mit.edu (4.1/AI-4.10) id AA14799; Mon, 26 Apr 93 09:46:54 EDT
From: dam@ai.mit.edu (David McAllester)
Received: by spock (4.1/AI-4.10) id AA02209; Mon, 26 Apr 93 09:46:53 EDT
Date: Mon, 26 Apr 93 09:46:53 EDT
Message-Id: <9304261346.AA02209@spock>
To: qed@mcs.anl.gov
Subject: Expansion Factors
Sender: qed-owner
Precedence: bulk
While I was at Cornell we had many discussions about "expansion
factors" --- how much larger a rigorous informal proof becomes before
it can be machine verified. As noted, tactical systems seem to make
the measurement more difficult --- parts of the proof can be hidden in
special purpose tactics. An honest measure of length would count
everything that you have to add to prove the theorem, including all
the new "general purpose" theorems and tactics. I think one just has
to use one's best judgement when making claims about expansion
factors. One also has to examine any such claim carefully. But I do
think that meaningful measurements can be made. You may have to count
"lines of code" in both proofs and tactics.
As many programmers know, however, lines of code is not always a good
measure of the difficulty of writing a given piece of software (or
proof).
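David's "honest" measure can be made concrete with a toy tally. In this sketch all the line counts are invented for illustration; the point is only that the numerator counts everything added to get the theorem machine-checked, not just the proof script itself.

```python
# Toy sketch of an "honest" expansion-factor measurement: count the
# proof script plus every new general-purpose lemma and tactic that
# had to be written, divided by the length of the rigorous informal
# proof.  All numbers here are invented.

def expansion_factor(informal_lines, formal_parts):
    """Ratio of total formal lines to informal proof lines."""
    total_formal = sum(formal_parts.values())
    return total_formal / informal_lines

parts = {
    "proof script": 120,
    "new general-purpose lemmas": 300,
    "new tactics": 180,
}
print(expansion_factor(40, parts))  # 600 formal lines / 40 informal = 15.0
```

As the caveat above notes, lines of code are a crude proxy for effort, so a figure like this is at best a rough estimate.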
David
From qed-owner Mon Apr 26 12:36:07 1993
Received: by antares.mcs.anl.gov id AA23016
(5.65c/IDA-1.4.4 for qed-outgoing); Mon, 26 Apr 1993 12:28:34 -0500
Received: from altair.mcs.anl.gov by antares.mcs.anl.gov with SMTP id AA23009
(5.65c/IDA-1.4.4 for ); Mon, 26 Apr 1993 12:28:32 -0500
From: Larry Wos
Received: by altair.mcs.anl.gov (4.1/GeneV4)
id AA20030; Mon, 26 Apr 93 12:28:32 CDT
Date: Mon, 26 Apr 93 12:28:32 CDT
Message-Id: <9304261728.AA20030@altair.mcs.anl.gov>
To: qed@mcs.anl.gov
Subject: reactions
Sender: qed-owner
Precedence: bulk
Successes, Theorems, and proofs
Here are some thoughts on topics found in the correspondence.
To me, it is clear why mathematicians do care and will care more in the future.
If the community gets very, very good at proof checking, then progress of
a substantial nature will occur in proof finding.
The reason seems obvious: When checking a proof, one is usually
given a proof with many gaps, gaps to be filled in by the system.
My recent experiments (with McCune's program OTTER)
in the context of proof checking have provided (for me)
ample evidence that gap filling is a tough problem to solve.
Since mathematics is vitally interested in proof finding, mathematicians
will (when we get even better) be interested in QED, at least indirectly,
even though as yet the role of the computer does not fascinate many.
The additional impetus for their interest rests with a marvelous
database of theorems and definitions and proofs, which will be
one of the components of QED.
Anybody who has browsed in the library looking for some needed
lemma or theorem will feel glee at the thought of a good database of
mathematical information, including proofs.
In my research, even from its beginning in 1963, I have found that the
examination of proofs, and of failures to get proofs, often leads to
significant advances.
In that regard, and in partial answer to a recent question,
we maintain a database of input and output files, proofs, theorems, and
commentary, accessible by anonymous FTP.
We have proved theorems from group theory, ternary Boolean algebra, algebraic geometry,
topology, Tarskian geometry, and various logic calculi.
Indeed, with OTTER (and its predecessors), we have answered a number of open questions.
As for the underlying logic, I cannot avoid my bias:
I am deeply committed to FOPC and, more specifically,
to the clause language.
I believe, without sufficient data, that the richer the language, the more difficult it is to control.
Since my interest is in proof finding, and since I am certain that
much strategy is required for attacking deep questions,
I vastly prefer the clause language: It works better
than anything of which I know--sorry for the bias based on the group in which I work.
More generally, I prefer set theory, and currently strongly
suspect that the various higher-order approaches offer far less
promise than does the FOPC approach.
Extremely encouraging to me is the fact that mathematicians
such as I. J. Kaplansky as early as 1980 thought the activity
of automated reasoning important.
Currently, the reaction to a paper appearing in the January Notices
on automated reasoning (in Keith Devlin's column) has produced
a promising increase in interest.
Clearly, the New York Times article of say July 1991 is nonsense:
We can and do produce proofs of interest to mathematicians and
proofs that they can and do learn from.
LW
From qed-owner Thu Apr 29 07:31:30 1993
Received: by antares.mcs.anl.gov id AA11142
(5.65c/IDA-1.4.4 for qed-outgoing); Thu, 29 Apr 1993 06:31:11 -0500
Received: from dcs.ed.ac.uk (stroma.dcs.ed.ac.uk) by antares.mcs.anl.gov with SMTP id AA11134
(5.65c/IDA-1.4.4 for ); Thu, 29 Apr 1993 06:31:01 -0500
Received: from harris.dcs.ed.ac.uk by dcs.ed.ac.uk id aa07431;
29 Apr 93 12:29 BST
From: dts@dcs.ed.ac.uk
Date: Thu, 29 Apr 93 12:29:37 BST
Message-Id: <8849.9304291129@harris.dcs.ed.ac.uk>
To: qed@mcs.anl.gov
In-Reply-To: Konrad Slind's message of Fri, 23 Apr 1993 03:04:55 +0200 <93Apr23.030459met_dst.8079@sunbroy14.informatik.tu-muenchen.de>
Subject: Re: Cold water
Sender: qed-owner
Precedence: bulk
Although the goals of QED seem exciting and worthy, I find myself agreeing
with Konrad Slind to the extent that I believe the potential benefit of
developing a repository of verified software (and/or hardware) components,
together with their proofs, is far more obvious than that of developing
a body of formally checked mathematics. From a technical point of view,
this also seems to be a more achievable goal. I don't mean to suggest that
QED is a bad idea; I just wish that the same enthusiasm could be generated
for doing the same thing with software/hardware.
Don Sannella
From qed-owner Thu Apr 29 08:35:57 1993
Received: by antares.mcs.anl.gov id AA12134
(5.65c/IDA-1.4.4 for qed-outgoing); Thu, 29 Apr 1993 08:22:50 -0500
Received: from linus.mitre.org by antares.mcs.anl.gov with SMTP id AA12127
(5.65c/IDA-1.4.4 for ); Thu, 29 Apr 1993 08:22:48 -0500
Received: from circe.mitre.org by linus.mitre.org (5.61/RCF-4S)
id AA05828; Thu, 29 Apr 93 09:22:40 -0400
Posted-Date: Thu, 29 Apr 93 09:22:38 -0400
Received: by circe.mitre.org (5.61/RCF-4C)
id AA12381; Thu, 29 Apr 93 09:22:39 -0400
Message-Id: <9304291322.AA12381@circe.mitre.org>
To: qed@mcs.anl.gov
Subject: Re: Cold water
In-Reply-To: dts@dcs.ed.ac.uk's message of "Thu, 29 Apr 93 12:29:37 BST."
<8849.9304291129@harris.dcs.ed.ac.uk>
X-Postal-Address: MITRE, Mail Stop A156 \\ 202 Burlington Rd. \\ Bedford, MA 01730
X-Telephone-Number: 617 271 2654; Fax 617 271 3816
Date: Thu, 29 Apr 93 09:22:38 -0400
From: guttman@linus.mitre.org
Sender: qed-owner
Precedence: bulk
I think the first major benefits of machine-supported rigorous
mathematics will be in mathematics education.
And that means that systems should be very serious about issues such
as the ability to take large ("human-sized") proof steps and having a
style of formalization and proof that is very close to the ethos of
traditional rigorous mathematics.
Just as the big pay-off of formal methods for software (and/or
hardware) depends on the normal, well-educated developer being able to
understand how to use them, the impact of formal methods for
mathematics depends on their being understandable to a broad range of
people with the normal kind of mathematical culture.
Josh
From qed-owner Thu Apr 29 19:03:02 1993
Received: by antares.mcs.anl.gov id AA25366
(5.65c/IDA-1.4.4 for qed-outgoing); Thu, 29 Apr 1993 19:01:18 -0500
Received: from arp.anu.edu.au by antares.mcs.anl.gov with SMTP id AA25358
(5.65c/IDA-1.4.4 for ); Thu, 29 Apr 1993 19:01:05 -0500
Received: by arp.anu.edu.au id AA03215
(5.65c/IDA-1.4.2.9 for qed@mcs.anl.gov); Fri, 30 Apr 1993 10:00:47 +1000
Received: from Messages.7.15.N.CUILIB.3.45.SNAP.NOT.LINKED.arp.anu.edu.au.sun4.41
via MS.5.6.arp.anu.edu.au.sun4_41;
Fri, 30 Apr 1993 10:00:47 +1000 (EST)
Message-Id:
Date: Fri, 30 Apr 1993 10:00:47 +1000 (EST)
From: Zdzislaw Meglicki
To: qed@mcs.anl.gov
Subject: Re: Cold water
Sender: qed-owner
Precedence: bulk
In <8849.9304291129@harris.dcs.ed.ac.uk> dts@dcs.ed.ac.uk writes:
> I believe the potential benefit of developing a repository of verified
> software (and/or hardware) components, together with their proofs, is
> far more obvious than that of developing a body of formally checked
> mathematics. From a technical point of view, this also seems to be a
> more achievable goal.
There is no reason why such a project should not be established. There
would certainly be enough electronic and software engineers interested
in it and it could certainly get a lot of financing from the industry
too. But this is also the reason why such a project isn't worth fighting
for: it's a free-wheeler - it will become established sooner or later
anyway with everybody's blessing. Mathematics, on the other hand, is a
different game. Most people just put it in the "too hard" basket. A
grand and widely publicised project like QED could greatly help in
getting it out of there and making it a part of everyday life. It would
also greatly help in "making computers think". Mathematics in this
respect is more challenging than just software and hardware verification
because its concepts and language are so much richer while still formal
and therefore, at least in principle, amenable to automation. A working
QED system would also be more broadly applicable than software and
hardware verification systems.
Zdzislaw Gustav Meglicki, gustav@arp.anu.edu.au,
Automated Reasoning Program - CISR, and Plasma Theory Group - RSPhysS,
The Australian National University, G.P.O. Box 4, Canberra, A.C.T., 2601,
Australia, fax: (Australia)-6-249-0747, tel: (Australia)-6-249-0158
From qed-owner Thu Apr 29 19:24:25 1993
Received: by antares.mcs.anl.gov id AA25678
(5.65c/IDA-1.4.4 for qed-outgoing); Thu, 29 Apr 1993 19:23:13 -0500
Received: from tuminfo2.informatik.tu-muenchen.de by antares.mcs.anl.gov with SMTP id AA25666
(5.65c/IDA-1.4.4 for ); Thu, 29 Apr 1993 19:23:08 -0500
Received: from sunbroy14.informatik.tu-muenchen.de ([131.159.0.114])
by tuminfo2.informatik.tu-muenchen.de
with SMTP id <57668>; Fri, 30 Apr 1993 02:22:59 +0200
Received: by sunbroy14.informatik.tu-muenchen.de id <8088>; Fri, 30 Apr 1993 02:22:54 +0200
From: Konrad Slind
To: qed@mcs.anl.gov
Subject: getting a formal education
Message-Id: <93Apr30.022254met_dst.8088@sunbroy14.informatik.tu-muenchen.de>
Date: Fri, 30 Apr 1993 02:22:50 +0200
Sender: qed-owner
Precedence: bulk
I agree with John Harrison and Josh Guttman: students seem like a good
initial target for a QED-like project. For one thing, their requirements
are certainly better understood (and more uniform!) than those of working
mathematicians: students need to pass their courses. To do that they
have to understand the material in the course. The material has already
been defined by others: all we formal methodistas need to do is provide
that material, making sure that it is consistent and accurate (which is
our stock in trade after all). We can provide it for free and it can be
everywhere, thanks to things like the internet.
Josh writes:
> And that means that systems should be very serious about issues such
> as the ability to take large ("human-sized") proof steps and having a
> style of formalization and proof that is very close to the ethos of
> traditional rigorous mathematics.
Yes. To echo a message from Victor Yodaiken, the formal presentation of
a classic text at the original author's level would be a milestone well
worth working towards. I would suggest staying in the realm of computer
science, and look forward to a "Hopcroft and Ullman Made Formal"
appearing soon at an ftp site near me.
I guess there might be a legal problem too - would this violate
copyright?
The formalization of such a course raises a couple of problems:
1. Rules of inference too powerful.
Could it be the case that one wants the student to get practice in
proving theorems by a certain method, but the automatic or pervasive
methods of the proof system solve the goals in one step?
2. Opacity of rules.
At a difficult point in a proof the student may want to examine the
proof in ever-increasing detail. Theorem proving systems that provide a
few "heavyweight" decision procedures as their basic rules seem not to
be amenable to such decomposition.
Konrad.
From qed-owner Thu Apr 29 20:34:31 1993
Received: by antares.mcs.anl.gov id AA26693
(5.65c/IDA-1.4.4 for qed-outgoing); Thu, 29 Apr 1993 20:33:15 -0500
Received: from tuminfo2.informatik.tu-muenchen.de by antares.mcs.anl.gov with SMTP id AA26685
(5.65c/IDA-1.4.4 for ); Thu, 29 Apr 1993 20:33:09 -0500
Received: from sunbroy14.informatik.tu-muenchen.de ([131.159.0.114])
by tuminfo2.informatik.tu-muenchen.de
with SMTP id <57690>; Fri, 30 Apr 1993 03:32:58 +0200
Received: by sunbroy14.informatik.tu-muenchen.de id <8088>; Fri, 30 Apr 1993 03:32:43 +0200
From: Konrad Slind
To: Zdzislaw.Meglicki@arp.anu.edu.au
Cc: qed@mcs.anl.gov
In-Reply-To: Zdzislaw Meglicki's message of Fri, 30 Apr 1993 02:00:47 +0200
Subject: Cold water
Message-Id: <93Apr30.033243met_dst.8088@sunbroy14.informatik.tu-muenchen.de>
Date: Fri, 30 Apr 1993 03:32:33 +0200
Sender: qed-owner
Precedence: bulk
I think Zdzislaw Meglicki has it backwards when he writes
> Mathematics in this respect is more challenging than just software and
> hardware verification because its concepts and language are so much
> richer while still formal and therefore, at least in principle, amenable
> to automation. A working QED system would also be more broadly
> applicable than software and hardware verification systems.
Systems like HOL, nuPRL, IMPS, and many others implement logics intended
for the formalization of mathematics. It also happens that they are
successful at doing verification. There's no difference in such systems
between mathematics and verification.
Konrad.
From qed-owner Thu Apr 29 21:26:22 1993
Received: by antares.mcs.anl.gov id AA27369
(5.65c/IDA-1.4.4 for qed-outgoing); Thu, 29 Apr 1993 21:23:47 -0500
Received: from arp.anu.edu.au by antares.mcs.anl.gov with SMTP id AA27361
(5.65c/IDA-1.4.4 for ); Thu, 29 Apr 1993 21:23:41 -0500
Received: by arp.anu.edu.au id AA04553
(5.65c/IDA-1.4.2.9 for qed@mcs.anl.gov); Fri, 30 Apr 1993 12:23:36 +1000
Received: from Messages.7.15.N.CUILIB.3.45.SNAP.NOT.LINKED.arp.anu.edu.au.sun4.41
via MS.5.6.arp.anu.edu.au.sun4_41;
Fri, 30 Apr 1993 12:23:36 +1000 (EST)
Message-Id: <4fs8qcKKmlE20=5L0r@arp>
Date: Fri, 30 Apr 1993 12:23:36 +1000 (EST)
From: Zdzislaw Meglicki
To: qed@mcs.anl.gov
Subject: Re: Cold Water
Sender: qed-owner
Precedence: bulk
In <93Apr30.033243met_dst.8088@sunbroy14.informatik.tu-muenchen.de>
slind@informatik.tu-muenchen.de writes:
> I think Zdzislaw Meglicki has it backwards when he writes
> > Mathematics in this respect is more challenging than just software and
> > hardware verification because its concepts and language are so much
> > richer [...]
> Systems like HOL, nuPRL, IMPS, and many others implement logics intended
> for the formalization of mathematics. It also happens that they are
> successful at doing verification. There's no difference in such systems
> between mathematics and verification.
What I had in mind is that if you want to verify a circuit or a computer
program you may be able to get away with a simpler system (and possibly
also a simpler logic) than if you want to talk, say, about symplectic
nondegenerate 2-forms on a 2n-dimensional differential manifold.
Computer programs and digital circuits at least are discrete systems. I
don't know about IMPS, but I have yet to see anyone prove Ahlfors'
theorem that the complex structure of Teichmuller space is Kahler with
HOL or nuPRL. When you talk about very elementary mathematics (e.g.,
basic logic, some elements of number theory, even some elements of set
theory) then indeed systems for doing that kind of mathematics and
systems for software and hardware verification are very much the same.
But if you'd like to move further in mathematics I think that you will
need more sophisticated and elaborate systems, whereas you probably
won't need that degree of sophistication in verification of discrete
systems such as circuits or programs. Hence my remark about a broader
area of applicability of QED (if it ever goes in that direction). For
example, I could imagine using QED for reasoning about supersymmetries
or about renormalisation in Quantum Electrodynamics (also QED!) but many
of the facilities of QED that would make that possible probably wouldn't
be needed or useful in software and hardware verification (although some
might, who knows).
Zdzislaw Gustav Meglicki, gustav@arp.anu.edu.au,
Automated Reasoning Program - CISR, and Plasma Theory Group - RSPhysS,
The Australian National University, G.P.O. Box 4, Canberra, A.C.T., 2601,
Australia, fax: (Australia)-6-249-0747, tel: (Australia)-6-249-0158
From qed-owner Thu Apr 29 21:58:01 1993
Received: by antares.mcs.anl.gov id AA27779
(5.65c/IDA-1.4.4 for qed-outgoing); Thu, 29 Apr 1993 21:56:20 -0500
Received: from tuminfo2.informatik.tu-muenchen.de by antares.mcs.anl.gov with SMTP id AA27771
(5.65c/IDA-1.4.4 for ); Thu, 29 Apr 1993 21:56:16 -0500
Received: from sunbroy14.informatik.tu-muenchen.de ([131.159.0.114])
by tuminfo2.informatik.tu-muenchen.de
with SMTP id <57668>; Fri, 30 Apr 1993 04:56:02 +0200
Received: by sunbroy14.informatik.tu-muenchen.de id <8088>; Fri, 30 Apr 1993 04:55:35 +0200
From: Konrad Slind
To: Zdzislaw.Meglicki@arp.anu.edu.au
Cc: qed@mcs.anl.gov
In-Reply-To: Zdzislaw Meglicki's message of Fri, 30 Apr 1993 04:23:36 +0200 <4fs8qcKKmlE20=5L0r@arp>
Subject: Cold Water
Message-Id: <93Apr30.045535met_dst.8088@sunbroy14.informatik.tu-muenchen.de>
Date: Fri, 30 Apr 1993 04:55:20 +0200
Sender: qed-owner
Precedence: bulk
Zdzislaw Meglicki gives some examples of advanced mathematics and says
that verification systems might never draw on this knowledge. I agree. I
guess I should make it clear that I am an advocate of doing verification
in a system of logic adequate for the formalization of mathematics.
Trying to get away with a simpler system may be unwise in the long run,
since who knows what models may become important in the future for
verification?
Konrad.
From qed-owner Mon May 3 16:16:07 1993
Received: by antares.mcs.anl.gov id AA16174
(5.65c/IDA-1.4.4 for qed-outgoing); Mon, 3 May 1993 16:05:26 -0500
Received: from aerospace.aero.org by antares.mcs.anl.gov with SMTP id AA16166
(5.65c/IDA-1.4.4 for ); Mon, 3 May 1993 16:05:18 -0500
Received: from antares.aero.org by aerospace.aero.org with SMTP (5.65c/6.0.GT)
id AA20489 for qed@mcs.anl.gov; Mon, 3 May 1993 14:05:03 -0700
Posted-Date: Mon, 3 May 93 14:04:58 PDT
Message-Id: <199305032105.AA20489@aerospace.aero.org>
Received: from calamari.aero.org by antares.aero.org (4.1/AMS-1.0)
id AA02432 for qed@mcs.anl.gov; Mon, 3 May 93 14:05:00 PDT
Date: Mon, 3 May 93 14:04:58 PDT
From: cal@aero.org
To: qed@mcs.anl.gov
Subject: QED Manifesto - Notes and Comments
Sender: qed-owner
Precedence: bulk
file qed930503.txt
first notes on QED discussion
I have read the QED Manifesto and the subsequent discussion with much interest
and not a little horror.
I believe that the project is fundamentally misguided, and that even if one
ignores that, the designs under discussion will be inadequate to the task.
Ok, so enough for the flames; now for some serious comment. First on the
manifesto itself, then on some of the subsequent commentary.
If I'm being too sarcastic at times, then I apologize, but it just irritates
me to see that naivete about "the way to do mathematics". I have no doubt
that each of you who write "_the_ way math is (or should be) being done" is
right about _a_ way it is being done. My complaint is just that none of you
take into account the enormously different styles of methods that are in use.
None of those methods is exclusive, and mathematics is being done in an
amazingly large variety of ways, sometimes by the very same person. It is
this variation of methods that QED must support in order to gain acceptance
from the mathematical community.
As to my use of "misguided", well, I think QED is trying to formalize an
inherently non-formalizable process; that of discovering new mathematics (even
though that is expressly not a goal of QED, I believe that it should be, since
otherwise QED is a pointless exercise). The real world is out there to foil
any attempt at completely severing mathematics from external influences
(though that is exactly what formalization is). The attempt is still
worthwhile, but we should all keep in mind the quote M.Thayer used: "God has
more disk space than we do", which I take to mean only that reality seems to
be much more diverse than any of our formal mechanisms can handle adequately.
The strong emotional reaction we see here results from the excessive reliance
of some people, myself included, on a notion of "truth" to provide life's
certainties, and on mathematics to provide that notion of truth. This is not
a technical issue, but it pervades the discussion of QED.
Comments on Manifesto
Motivations
I agree with some of the situation description in the first reason, about the
increase of mathematical knowledge. Of course, most of that increase is in
areas known to be "irrelevancies", with a very small theorem-to-definition
ratio, and I am not convinced that even publishing those results in the usual
way is useful. By the way, I can't quite decide why the theorem-to-definition
ratio matters to me. It is something about getting out more than you put in.
It seems that good mathematics should have a high ratio (yes, I know that the
ratio is not meaningfully well-defined, but there is some notion there that is
important).
Problem solving and theorem proving are only one of the motivations for doing
mathematics. Another is less goal-directed, a kind of exploration of new (or
old) mathematical territory. The criteria for success here are harder to
elaborate, because the time scale is different (e.g., Levi-Civita's
development of tensors preceded their use in relativity; there are many other
examples).
Despite the phrase "knowledge and techniques" in the first sentence of the
manifesto, all of the subsequent discussion of the mathematics itself is about
theorems and proving theorems. The methods of a mathematical theory are often
more important than the theorems anyway, since it is often only the methods
that generalize across theories.
I believe that the expected (rightly, I think) enormous sizes of "internal
documentation of ... " and "specifications of ..." represent failures of
imagination and abstraction, and simply follow from the amount of thought and
money typically allowed for their construction. Also, I think they partly
reflect the inadequacy of the available software tools to support computation
with abstractions.
I wholly support the third motivation, for education, and hope that there is
LOTS more study of the kinds of tools that must be available in such a system;
no present system comes close, regardless of subject matter. Some of my own
research is relevant here, in the area of software support of complex systems
[Landauer-Bellman92], [Landauer-Bellman93].
The fourth motivation is, in my mind, neither beautiful nor compelling. It is
simply wrong-headed.
In the history of mathematics, we have seen many instances when the very
foundations of certain areas of mathematics have changed (calculus by
infinitesimals to calculus by epsilons and deltas, Riemann's "hypotheses" vs.
Kronecker(?)'s "facts" that underlie the foundations of Geometry; Cantor's
notion of the infinite vs. "das ist nicht Mathematik; das ist Theologie"
("that is not mathematics; that is theology"); the
axiomatic method itself; some less well-known examples are Aczel's set theory
with an anti-foundation axiom [Aczel88]; Chaitin's complexity theory
[Chaitin93] and Barwise's situation theory [Barwise89] are possibly other
examples), sometimes when new application areas (such as natural language) are
treated. I believe that mathematics will need further changes to treat
computing adequately (we are still stuck with the mathematics that derives
from our understanding of compilation; there are harder problems we have
ignored), and to treat the full complexity of biological systems at all.
In a sense, we have started in the middle somewhere, as in physics, with
concepts and methods that we can comprehend in some direct way. Then we built
structures on top of that, and we also built structures underneath that make
us feel like the others are well supported. Unlike physics, however, our
criterion for success in mathematics is not just the application of methods to
"the real world"; it is also (or perhaps predominantly) in applications of
those methods to other parts of mathematics. Methods that unify and simplify
several parts of mathematics at once are successful; group theory and category
theory are two well-known examples of that process, and another perhaps more
obscure one is the use of sheaves and then schemes to redo the basics of
algebraic geometry.
I think this foundational aspect is a key to the motivation for QED, and to
its expected failure. We become uncomfortable when there is no foundation
(set of bottom elements, concrete objects, atoms, fundamental concepts, or
some other term that makes us think we understand these basic objects) from
which we can build the defined structures. And yet we can reason with the
unfounded elements quite well, if not quite without trepidation. What
mathematics has done in the past is to build a rigid structure (well, actually
several rigid structures) somewhere in the middle, with tentacles leading up
into the large and down into the small. As long as we work in that structure,
we are comfortable, but when we get to the boundaries at either end, we have
difficulty. As the manifesto says, in the Reply to Objection 7, "The QED
system will ... perhaps ... never [be used] in the discovery of ... deep
results." This shows a recognition that the ends of the structure are
difficult. It is true, though, that the foundational issues are the most
difficult. For instance, all of the examples above are difficulties at the
bottom of this structure.
Now QED wants to build such a structure, which is perfectly reasonable. But
also the manifesto describes it as though that structure will be
foundationally complete (yes, this is an undefined term, but it means that
there is no further foundation needed), and I think that attitude is wrong. I
believe mathematics will follow the same history as physics, namely, that
there is no ultimate foundation, only plateaus on which we seem to rest for a
while, until we examine them closely enough. This is why I expect QED, as it
is described in the manifesto, to be wishful thinking. I think the idea could
be salvaged by changing the expressed attitude about mathematical foundations,
but that is always difficult.
The fifth motivation is too militaristic for me. If the thought police
return, they will be the ones who have QED, not us. Moreover, it is
well-known that what constitutes a proof is at least partly a social
phenomenon (witness Hilbert's use of a form of Mathematical Induction that
does not use successive integers to prove the Prime Ideal Theorem; that style
of inference was new to him and not well-accepted for a while; witness also
Appel and Haken's proof of the four-color theorem, which is still not completely
accepted), and that flexibility must be maintained for mathematics to progress.
Just to throw another grenade into the discussion, I also think that
self-reference is important. We seem to find potentially infinite towers on
well-founded bases (e.g., the set of non-negative integers) much more
congenial than infinitely descending towers, and when there is self-reference,
we get even more worried.
The tremendous success of the language / meta-language separation in providing
foundational coherence in mathematics and logic has blinded most of its users
to its cost in expressiveness. Instead of working to find logics that allow
study of self-reference, we simply eliminated it altogether, because it could
(and did) cause trouble, as illustrated by various set-theoretic paradoxes
[Barwise-Etchemendy87]. Self-reference has always been somewhat suspect in
mathematics and in logic because of these paradoxes. The only approach I have
seen that even addresses this issue is Barwise's situation theory, which is
being developed in conjunction with new logics of self-reference [Barwise89],
and in fact, a new set theory that supports them [Aczel88] [Barwise-Moss91].
These logics arose from the study of natural language, with its very definite
non-separation of the subject of discourse from the discourse itself (the
chief culprits here are words like ``this'', ``that'', and ``the other
thing''). This new kind of logic both facilitates reasoning with
self-reference and also helps resolve some of the sticky paradoxes of set
theory.
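As a concrete aside (the graph and names below are my illustration, not from the text): under Aczel's Anti-Foundation Axiom a set is pictured by a pointed graph, and two pictures depict the same hyperset exactly when they are bisimilar. A naive greatest-fixpoint check makes this identity criterion computable for finite pictures:

```python
# Hedged sketch: node names ("w", "a", ...) are illustrative only.

def bisimilar(graph, a, b):
    """Naive greatest-fixpoint bisimulation check on graph: node -> children."""
    rel = {(x, y) for x in graph for y in graph}   # start from the full relation
    changed = True
    while changed:
        changed = False
        for x, y in list(rel):
            forth = all(any((c, d) in rel for d in graph[y]) for c in graph[x])
            back = all(any((c, d) in rel for c in graph[x]) for d in graph[y])
            if not (forth and back):
                rel.discard((x, y))
                changed = True
    return (a, b) in rel

# Omega = {Omega} drawn as a self-loop, the same set drawn as a two-cycle,
# and the empty set for contrast:
graph = {"w": ["w"], "a": ["b"], "b": ["a"], "e": []}
assert bisimilar(graph, "w", "a")        # both picture the unique x = {x}
assert not bisimilar(graph, "w", "e")
```

The point is that the self-referential equation x = {x} has a definite, mechanically checkable solution once set identity is taken to be bisimulation rather than well-founded membership.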
Replies to Objections
It is clear that there are many similar logics in use now. Many people
naively assume the pseudo-"mathematical" ideal of having one base system into
which everything can be converted and compared. I claim first that it can't
be done, and second that it shouldn't be done. First, any formal model is
limiting by definition, so choosing one such model automatically and
irrevocably precludes studying certain things. Second, the well-known
failures of mathematical methods in computing and in biological systems imply
that new and different mathematics are needed. These may require new
inferential mechanisms as well. It is much more important to standardize on
the allowed variability in a system than to standardize on one kind of
expression. Also, if the goal is truly _all_ of mathematics, then all the
different kinds of limited inference mechanisms must be used also, so that one
can easily determine, given a collection of allowed inference mechanisms, what
are and are not theorems and other derived methods. Note also that there will
be new mathematics required in the construction and analysis of such a large
system, so that certain kinds of self-reference will be necessary (in so far
as QED is a formal system, it must be representable within QED's logics; some
careful form of computational reflection will be essential).
I completely agree with the Reply to Objection 2, but would recommend that
results and notations and methods and algorithms also have an associated
pedigree, to keep track of source information. We have found that to be an
invaluable aid to validating the behavior and internal information in a
complex software system. Moreover, it might help reduce the exponentially
growing volume of published mathematics to allow "publication" into QED rather
than paper.
The Reply to Objection 3 does not address the issue raised there at all. I
think that Objection 3 may be correct (though not relevant, since most of the
mathematics that has been published is not relevant as I mentioned above), and
the reply is also both correct and irrelevant. The issue is not that we should
do the easy stuff first, but rather how and by what means should the system be
"bootstrapped". I was glad to see that notion mentioned in the discussion.
Objections 4 and 5 are beside the point. Both are correct; both are irrelevant.
First, one should make a careful distinction between "computationally checked"
and "mechanically checked", and note that neither is possible, since we cannot
formally check the operation of the computers or other mechanisms; they are
objects of the "real world" and mathematics is not. Nonetheless, QED may be
feasible; it seems worth the effort to find out.
The Reply to Objection 6 is ok as far as it goes, but I suggest that some
professional journals would eventually be interested because it would greatly
simplify the problem of relating a new piece of mathematics to the rest of the
field. The repeated failure of technical reviewers to exhibit the requisite
breadth of knowledge is a hint that that breadth is harder and harder to gain,
and not just because the subject is growing fast.
Objection 7 is exactly right, and should be carefully taken into account. The
Reply to Objection 7 is naive, both from the notational point of view, and
from the computational point of view. The notion that TeX "demonstrates that
it is no problem for computers to deal with any known mathematical notation"
is ludicrous; it only demonstrates that we can cause the marks to appear on
paper using TeX, not that there is any set of computational methods that can
be used to describe the relationships those marks are intended to denote, or
even to identify what context or conventions are to be used to interpret the
notations.
Mathematicians are amazingly productive in inventing new notations that
convert problems in one domain into problems in another. One scenario for
this progression is that new notations are (1) first used as abbreviations in
terms of other base symbols, then (2) combinations of the accepted rules of
manipulation of the base symbols are gradually found useful or interesting for
manipulating those abbreviations, and finally (3) those manipulations of the
abbreviations become formalized into theories about safe manipulations of the
new notations. It is my opinion that QED must use this same mechanism (and
many others, too) of incorporating new processes that manipulate objects in
one of several different ways, depending on how deep their abbreviation
definitions are.
For a simple example of this abbreviation process, consider the rational
numbers. The expression 1/4 is not a number, it is a problem. It is such a
familiar problem, though, that we know lots of manipulations that we can
safely do with it, most of which were derived from safe manipulations with
integers, which can in turn be derived from safe manipulations with sets (in
one definition of integers as sets that is used to prove Peano's axioms). We
therefore think of 1/4 as a fundamental object, and treat it that way in most
investigations.
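The abbreviation view of 1/4 can be made concrete. In the sketch below (my own minimal illustration, not a construction from the text), a rational is just a pair of integers, and its arithmetic and equality are derived entirely from safe integer manipulations, exactly as the three-stage progression above describes:

```python
# Minimal sketch: rationals as pairs of integers, with all manipulations
# reduced to integer manipulations. The class name Rat is mine.
from math import gcd

class Rat:
    def __init__(self, num, den):
        if den == 0:
            raise ZeroDivisionError("the 'problem' a/b needs b != 0")
        g = gcd(num, den)                     # reduce to lowest terms
        sign = -1 if den < 0 else 1
        self.num, self.den = sign * num // g, sign * den // g

    def __add__(self, other):                 # a/b + c/d = (ad + cb)/bd
        return Rat(self.num * other.den + other.num * self.den,
                   self.den * other.den)

    def __eq__(self, other):                  # a/b = c/d  iff  ad = cb
        return self.num * other.den == other.num * self.den

    def __repr__(self):
        return f"{self.num}/{self.den}"

quarter = Rat(1, 4)                           # "1/4 is a problem" ...
assert quarter + quarter == Rat(1, 2)         # ... with safe manipulations
assert quarter + quarter + quarter + quarter == Rat(1, 1)
```

Every rule here (cross-multiplication for equality, the common-denominator sum) is a theorem about integers, yet once those theorems are in place we freely treat Rat(1, 4) as a fundamental object.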
The computational naivete lies in the thought that everything will be
translated into a common internal form (an implication which may not be
intended is that it be the same form, as implied by the root logic; simple
syntactic analyses and prettyprinting are, as the reply says, quite feasible).
It has been seen repeatedly in our studies of computational support for
complex systems that there is no one method, approach, notation, language or
even paradigm that can be adequate, so that multiplicities must be allowed
even at the lowest levels of processing [Landauer-Bellman92].
In fact, the last sentence of the reply is one of the horrifying things about
the entire proposal. If QED is not intended to support the development of new
mathematics from the very first deep insights, then I think it will not be
used by anyone except clerks, and that few mathematicians will even notice it,
let alone support it.
Objection 8 is right also. Any piece of code as large as QED must be will be
inherently unreliable for many reasons, not the least of which is the
software and hardware base (compilers, operating systems, and processors).
The Reply to Objection 8 is also right. However, it is not the human errors
in proofs that are in question; it is the programming errors in proof
discovery, construction, and verification that are the problem. These errors
will be harder to discover, because humans are less familiar with the
peculiarities of computer programs than they are with the peculiarities of
human reasoning. Still, improved accuracy is likely after all of the presently
accepted body of mathematics is entered and verified.
There are two other parts of the reply that are difficult for me. I hope it
is clear that "expressing the full foundation of the QED system in a few pages
of mathematics" will require some new mathematics, since the expressiveness of
current mathematical systems of notation cannot handle the variability of
components that would be required for such a compact representation.
Lastly, "agreement by experts ... that the definitions are 'right' ... "
smacks of more cliquishness than I like. In every subfield there are some
generally accepted definitions, and some alternative sets of definitions that
are all basically equivalent (sometimes that equivalence is the major result
of the theory). One may usefully choose a preferred definition (for expository
purposes, for example), but the alternatives are as valid and also important
for exploring the relationships among the different concepts in the subfield.
I do think there are some real objections to the concept of QED as stated. My
main non-technical concern is the social or political issue: who decides what
goes into QED? Not every mathematician will play this game, no matter what
the rules are (mathematicians, indeed all humans, are just like that). This
objection is related to Objection 7, and to my mind much more serious than the
trivialities of syntax.
Background Problems
"A second reason is that few researchers are interested in doing the hard work
of checking proofs". I do not believe for a moment that anyone who thinks of
herself or himself as a research mathematician is going to spend any
significant time "checking proofs", especially according to rules that are not
common mathematical practice. Even if the rules were commonly accepted, that
is clerk work, not for humans. The result is that the researchers involved in
QED will primarily be computer folk who will concentrate on the software that
makes checking proofs easier (which is commendable), and then inevitably (I
believe that some of the discussion already exhibits this tendency) on
suitable restrictions on the mathematics to make producing the software easier
(which is deplorable).
1. Reliability. This is also Objection 1.
2, 3, 4. Logic. This is answered by the use of multiple logics and
appropriate interconnection theories. The research necessary here is to
extend the work of [Nelson-Oppen] to more complicated theories with more
complicated interactions.
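The core of the [Nelson-Oppen] combination is an equality-propagation loop between solvers for disjoint theories. The sketch below is a toy illustration of that loop only; the two "solvers" are hypothetical stand-ins with hard-coded facts, not real decision procedures:

```python
# Toy sketch of Nelson-Oppen style equality propagation. All facts and
# variable names below are invented for the illustration.

def arith_solver(shared):
    # Pretends to know x = 1 and y = 1, hence entails x = y.
    values = {"x": 1, "y": 1}
    return {(a, b) for a in values for b in values
            if a < b and values[a] == values[b]}

def uf_solver(shared):
    # Pretends to know f(x) = u and f(y) = v; by congruence,
    # x = y entails u = v.
    return {("u", "v")} if ("x", "y") in shared else set()

def combine(solvers):
    shared = set()                  # equalities over shared variables
    changed = True
    while changed:                  # exchange entailed equalities to a fixpoint
        changed = False
        for solver in solvers:
            new = solver(shared) - shared
            if new:
                shared |= new
                changed = True
    return shared

result = combine([arith_solver, uf_solver])
assert result == {("x", "y"), ("u", "v")}
```

The research question raised above is what replaces this simple exchange of variable equalities when the component theories interact in more complicated ways.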
5. Syntax. This is a design requirement, not a problem. It is both
technically feasible (using cooperative grammars) and essential.
6. Human. This is Objection 2 also.
7. Extensibility. Why do these plateaus occur? They also occur in
mathematical theory, and in most sciences also, and these endeavors are much
more complex than the scope of any of the theorem provers in question. I
believe that the plateaus are evidence for our earlier claim that no formal
system can encompass all aspects of a real system. In so far as the
correspondence between the formal system and the real system is close (that
is, the formal system is a good model of the real system), it will take time
to perform the analyses, discover the relationships in the model, and design
the experiments that define the boundaries of that correspondence. That time
is part of the plateau, as is the time spent exploring the boundaries,
discovering new anomalous relationships, and inventing new models. The
plateaus end in science and mathematics because humans are great at stepping
outside systems to look at them from different viewpoints; we do not know how
that is done.
8. Infrastructure. QED will need tools that work with objects within the
system, not just with objects in the application domain.
9. Rigor. This seems like Objection 2 also.
10. Standards. The problem mentioned is the result of naive standardization
instead of explicit management of variability. It should not matter what
syntax is used to express induction schemas, as long as the requisite
information is there. Also, by the way, there are many different induction
schemas that interact in different ways with other inference rules, according
to the amount of information that is included with them. It is essential for
the utility of QED to allow the full range of variability here, and to have
explicit models for the interactions.
11. User Interface. Ease of use for whom? I would be glad if proving a
theorem in QED were no harder than proving a theorem using pen and paper.
However, while there are long-standing traditions about many mathematical
objects and notations for them, there are few or no traditions of any kind for
notations that describe manipulations of expressions in those more common
notations. There is a basic difference of feeling between doing something
(e.g., moving around in a mathematical space) and telling an idiot savant how
and where to move. As a species, we have been moving around far longer than
we have been telling others how to move. This difference between actions and
words for actions may be insurmountable.
Root Logic
Starting with a choice of root logic before we know what we are going to need
is self-defeating. All of the discussion of a "root logic" is unproductive
until much more of the support infrastructure is in place. Which logic is not
the first question; that is like deciding what programming language to use
before you know what the program is supposed to do.
The most important part of the description here is the last paragraph, but it
appears that its message is ignored elsewhere in the manifesto.
Recommendations
The first two planning steps are fairly interesting. There is an almost
reasonable list of items to be considered for inclusion, but it is much too
superficial. It only contains "object-like" elements, confuses the software
items with the "application domain" items, and completely misses the notion of
methods and processes in both domains. It is only superficially that the
importance of mathematical theory is in the theorems. Actually, it is in the
methods, which are in one sense just like the theorems, except that they have
much more complicated parameter objects (which can include methods as well).
One hint of this difficulty is in the cases in which essentially the very same
proof can be used to prove two different theorems in two different parts of
mathematics; different objects, different axioms, and yet the properties of
those objects are sufficiently similar that the same proof can be used.
Other things missing are primarily software infrastructure items, such as the
integration agents that make interconnections between parts of the system (it
is clear, I hope, that this will be a very widely distributed system), the
registration agents that will maintain knowledge of certain information or
services (it is clear also, I hope, that this will be a very dynamic system),
the "middlemen" that will direct requests for information or services to
appropriate providers, and a host of other software support structures, such
as self-monitoring agents that watch for trouble in various parts of the
system. My main point here on the software structure is simple: QED will need
more software infrastructure than you will believe at first, and perhaps more
than you will ever believe.
The third planning step is separable from all the others, and would be greatly
beneficial to a number of projects.
The fourth step is another sort of mire. Many theorems have several different
statements, all somewhat related. There should be a careful collection of
several versions of useful theorems, and their relationship should be
indicated. Moreover, there are some expositions of mathematics in which
results are proven in different orders; one takes a known theorem as a
definition and proves a result otherwise treated as an axiom. The inferential
structure of the subject can be traveled in many directions. Sometimes these
different views of the foundations of a subject area are very helpful for
understanding, and this style of analysis should be available in QED.
Comments on Discussion
The discussion has mainly been in a few categories:
choices of language, logic, implementation (db, ui)
the effect of Bourbaki
priorities of mathematical subjects
little theories, and other information organizations
existing theorem provers and information sharing
existing logics and the question of which root logic to use
I will start with some common incorrect assumptions that I think occur in the
manifesto and in the comments. Because of the length of this note already
(and I only thought I was making a _few_ comments!), I will save my comments
on the discussion for later.
Most discussants are missing the notion of context of statements in a formal
language (Beeson is an exception here). Instead, most of the commenters are
trying to make the statements all self-contained, and that is only possible by
overly restricting the range of allowable interpretations. Mathematical
notations depend on context for the interpretation of the symbols, both syntax
and semantics, and it seems to me that actually much more information is in
the context of an expression than is in the particular expression.
Digression: "lingua franca" is a great term, referring to one universal
language from within another, though the two were universal at different
times and for different purposes.
Most discussants are missing the notion of how much software infrastructure
will be needed in such a system (Boyer is an exception here). Instead, most
of the commenters are concentrating on the superficial (as far as the
programming is concerned) questions of application domain structure, root
logic, and algorithms for processing the application domain objects.
Most discussants seem to be assuming (mistakenly in my view) that mathematics
is in the theorems, not the processes and relationships among concepts. Until
those are handled explicitly and properly, QED will have little or nothing to
do with the practice of mathematics, despite its declared subject matter.
Most discussants are still assuming that rigor means everything is reducible
to the same level of detail in the foundations. I do not think that is true.
Each object and each process will exist at many different levels of detail
simultaneously, and inferences at some levels will automatically convert to
inferences at other levels. There is no reason to translate everything into a
single level. Besides, I do not believe that will be possible. In so far as
we can translate everything into one formal system, that formal system
explicitly limits what we can write and how we can process it, and I think
that historically, mathematics has shown a great inability to be so
constrained.
References
[Aczel88]
Peter Aczel,
Non-well-founded Sets,
CSLI Lecture Notes No. 14,
Center for the Study of Language and Information,
Stanford U., U. Chicago Press (1988)
[Barwise89]
Jon Barwise,
The Situation in Logic,
CSLI Lecture Notes No. 17,
Center for the Study of Language and Information,
Stanford U., U. Chicago Press (1989)
[Barwise-Etchemendy87]
Jon Barwise, John Etchemendy,
The Liar: An Essay on Truth and Circularity,
Oxford U. Press (1987)
[Barwise-Moss91]
Jon Barwise, Larry Moss,
``Hypersets'',
Mathematical Intelligencer, Vol. 13, No. 4, pp. 31-41 (1991)
[Chaitin93]
Gregory Chaitin,
Information-Theoretic Completeness,
Lecture Notes, UCLA
[Landauer-Bellman92]
Christopher Landauer, Kirstie L. Bellman,
``Integrated Simulation Environments'' (invited paper),
Proceedings of DARPA Variable-Resolution Modeling Conference,
5-6 May 1992, Herndon, Virginia,
Conference Proceedings CF-103-DARPA, published by RAND (March 1993)
[Landauer-Bellman93]
Christopher Landauer, Kirstie L. Bellman,
``The Role of Self-Referential Logics in a Software Architecture
Using Wrappings'',
Proceedings of ISS '93: the 3rd Irvine Software Symposium,
30 April 1993, U.C. Irvine, California (1993)
[Nelson-Oppen]
Greg Nelson, Derek C. Oppen,
``Simplification by Cooperating Decision Procedures'',
ACM Trans. Programming Languages and Systems, v. 1(2),
pp. 245-257 (October 1979)
From qed-owner Tue May 4 09:57:49 1993
Received: by antares.mcs.anl.gov id AA01130
(5.65c/IDA-1.4.4 for qed-outgoing); Tue, 4 May 1993 09:37:50 -0500
Received: from cli.com by antares.mcs.anl.gov with SMTP id AA01117
(5.65c/IDA-1.4.4 for ); Tue, 4 May 1993 09:37:44 -0500
Received: by CLI.COM (4.1/1); Tue, 4 May 93 09:31:07 CDT
Date: Tue, 4 May 93 09:37:25 CDT
From: Robert S. Boyer
Message-Id: <9305041437.AA28366@axiom.CLI.COM>
Received: by axiom.CLI.COM (4.1/CLI-1.2)
id AA28366; Tue, 4 May 93 09:37:25 CDT
To: cal@aero.org
Cc: qed@mcs.anl.gov
Subject: QED Manifesto - Notes and Comments
Reply-To: boyer@CLI.COM
Sender: qed-owner
Precedence: bulk
I enjoyed your remarks about the manifesto and the commentary. I
agree with some of what you say, and disagree with some. Here are
some disagreements.
1. On discovery. Your first `non flame' remark seems to accuse the
manifesto of trying to formalize discovery. But I don't read it that
way. Far from it. You say that otherwise QED is a pointless
exercise. Here we seem to disagree radically. I think that if QED
simply helps with, say, program verification, it will be far from
pointless. I believe it will help there. I've done a lot of program
verification, and I can tell you I'd just love to be able to tap into
a formalized edifice that treats Fourier transforms and other parts of
engineering mathematics. I would say that the formalization of
discovery is extremely hard, much harder than the QED project. The
difficulty stems from the fact that discovery can be done in so many
disparate, even mysterious ways. Poincare's description of suddenly
seeing a complete solution to some ultra hard problem, after not
having thought about it for months, just as he stepped on a trolley,
makes me think that mechanizing discovery is a long way off.
2. On the increasing complexity of high technology. From the
conversations I have had, I believe the increasing size of the
internal documentation of chips is strictly a function of the
increasing number of transistors one can fit in a square inch. It is
not caused by stupidity or ineptness, as you seem to imply.
Semiconductor companies do whatever they think will work to reduce
costs, and use imagination and abstraction a good deal. When they
know how to do better, they will, or someone else will.
3. On how much the foundations of mathematics are likely to change.
We certainly disagree on the status of the foundations of mathematics!
I would say that first order logic and set theory are here to stay in
this sense: whatever follows, and surely there will be some
improvements, will retain essentially all of the familiar notions,
theorems, and rules of inference that we currently have. And PRA will
continue to be adequate, in principle, for proof theory. Euclid's
geometry and Newton's calculus only had to be discovered once. So too
set theory and the primitive recursive functions. I believe the real
foundations for mathematics have been mainly found, once and for all,
finally, about 1900-1930. Time will tell. To the extent that
self-referring objects are worth studying but not yet understood, I
suspect they can be accommodated within existing mathematical theories
or minor extensions thereof. For example, Scott showed how to give a
model for the lambda calculus, in which functions can (appear to) take
themselves as arguments. Undoubtedly mathematics will improve greatly
in the future, but I personally don't expect the basic foundations to
change any more than I expect plane geometry to change.
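As a concrete aside (my example, not Boyer's): in any untyped setting, self-application is unproblematic in practice, and the "paradoxical" f(f) even yields recursion without a built-in fixpoint operator:

```python
# Self-application in an untyped setting: fact takes itself as an argument.
fact = lambda f: lambda n: 1 if n == 0 else n * f(f)(n - 1)

assert fact(fact)(5) == 120   # f(f) drives the recursion
assert fact(fact)(0) == 1
```

Scott's models give this kind of self-application a set-theoretic semantics, which is the point of the remark above: such phenomena fit within (modest extensions of) existing mathematics.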
4. On the social process. I do not agree at all that "it is
well-known that what constitutes a proof is at least partly a social
phenomenon". Unless you are playing on two different meanings of the
word "proof". There is the "proof" that people, with their
meat-brains do, and some of those "proofs" are simply false or
fraudulent, every bit as open to mistakes and even corruption as
banking! And then there is the Hilbert-Goedel notion of proof
(sequence of wffs such that each is an axiom or follows from previous
members ...). This notion of proof is no more social than the
sequence of integers! Do you also think that a "social process"
determines whether 27 follows 26 in the sequence of natural numbers?
Whether something is a proof, in this second, exact sense, is just as
much a matter of mathematics as 1, 2, 3.
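The Hilbert-Goedel notion can be made vivid in a few lines. The toy axioms and formula encoding below are mine, chosen for brevity: a proof is a list of formulas, each either an axiom or obtained from earlier lines by modus ponens, and checking that is pure combinatorics:

```python
# A proof is a list of formulas; each must be an axiom or follow by
# modus ponens from earlier lines. Formulas are encoded as tuples,
# with ("->", p, q) meaning "p implies q".

def is_proof(lines, axioms):
    for i, formula in enumerate(lines):
        if formula in axioms:
            continue
        earlier = lines[:i]
        # modus ponens: some earlier p together with an earlier ('->', p, formula)
        if any(("->", p, formula) in earlier for p in earlier):
            continue
        return False
    return True

AXIOMS = {("p",), ("->", ("p",), ("q",))}
proof = [("p",), ("->", ("p",), ("q",)), ("q",)]
assert is_proof(proof, AXIOMS)          # accepted with no social process
assert not is_proof([("q",)], AXIOMS)   # and rejected the same way
```

Whether a given list passes this check is as objective as whether 27 follows 26; the social questions live elsewhere, in the choice of axioms and in what informal arguments we accept as evidence that a formal proof exists.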
Do you think that there is anything in principle that would prevent
the entering of Hilbert's proof of the Prime Ideal Theorem or the
recent computer-aided proof of the four color theorem into existing
proof checking systems? I don't. I suspect that whatever form of
induction Hilbert used is now well understood and well formalized in
ZF. The curious part of the four color proof was its use of long
computer runs, but, again in principle, program verification is up to
handling this. There is nothing theoretically difficult, or even new,
about using a calculation in a proof. (Note to everyone: please
notice the several uses of "in principle" above before accusing me of
the ridiculous claim that anyone could, in practice, get the four
color conjecture checked by an existing system any time soon!)
5. On irrelevant mathematics. As for objection 3 of the manifesto,
it may be that there is lots of irrelevant mathematics, but it is, I
think, politically dumb to put something like that into an organizing
document. It is hard enough in this society even to talk about
anything being important. We seem to be getting to the moral state
that all things are equal, including plus and minus one. While I
personally can imagine standing up in public and saying that "the
fundamental theorem of calculus" is IMPORTANT, I'm not sure I would
ever see the point of standing up, in public, and saying "the paper by
Smith and Jones in the last issue of the Proceedings of the Society
for Write Only Papers" is irrelevant and doesn't deserve checking. As
soon as you start asserting that lots of mathematics is irrelevant,
someone will start asking you to name names.
6. Objections 4 and 5 may be dumb, but they are made, and I think all
common objections should be treated in the manifesto. The more the
better. In one paper, de Bruijn remarks that the two standard initial
objections to Automath were (a) Goedel proved you cannot do that and
(b) Peano already did that.
7. On formalizing mathematical notations. You seem to think that
formalizing existing mathematical notations is a lot harder than I do.
Perhaps we have had radically different exposures to notations. I was
personally convinced in 1967, from working with A. P. Morse's book "A
Theory of Sets" (in which he carefully introduces loads of standard
notations), that the formalization of any mathematical notation can
probably be done in an acceptable way in a reasonable amount of time.
The most damage I have seen in formalizing notations used in practice
by skilled mathematicians is that when formalizing some things (a) you
have to insist that the bound variables be explicitly identified and
mentioned and (b) a few extra marks must be added here and there to
distinguish one defined concept from another. A good case study is
the O(n) notation, and some nice remarks on the many various things
this can mean can be found in "Concrete Mathematics". I would be
happy to discuss a particular problematic notation or two of your
choice with you.
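For concreteness, one textbook reading of the O(g) notation, with its bound variables made explicit as point (a) above requires, is (a sketch; "Concrete Mathematics" surveys the several variant meanings in actual use):

```latex
f \in O(g) \iff \exists C > 0.\ \exists N.\ \forall n \ge N.\ |f(n)| \le C\,|g(n)|
```

Informal usage then overloads this in the ways that book discusses, e.g. writing O(g) on the right of an equals sign to stand for some unnamed member of the class.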
8. On multiplicities.
It has been seen repeatedly in our studies of computational support for
complex systems that there is no one method, approach, notation, language or
even paradigm that can be adequate, so that multiplicities must be allowed
even at the lowest levels of processing [Landauer-Bellman92].
I feel you have given up much too early! There is so much evidence
that the beautiful edifice of mathematics can be formalized in, say,
PRA and developed in, say, ZF! Not that PRA or ZF are the only
vehicles or even the right vehicles. The problem here, I think, is
exactly opposite to what you think. The problem is that there are TOO
MANY EQUIVALENT, ULTRAPOWERFUL FORMALISMS! We are like children in a
candy shop, the heaven of infinities of equivalent formalisms of
fantastic power.
9. On getting `approval' for the major defined concepts.
One may usefully choose a preferred definition (for expository
purposes, for example), but the alternatives are as valid and also
important for exploring the relationships among the different
concepts in the subfield.
I agree, and it will certainly be important in QED to prove the
equivalence of various definitions. But I believe it will be
psychologically important to get a panel of the best experts to agree
that at least one of the definitions of each important mathematical
concept is "right". Same for the statement of at least one version of
each important, named theorem. This will give some people entering
things into QED that build on the work of others confidence that when
they invoke, say, the fundamental theorem of calculus, they are
employing the common notion of the reals, continuous function,
integral, and derivative.
10. Who decides what goes into QED?
From the earliest, I think the general consensus has been, and ought to
have been, that this should be a really open business, as open as we can
make it. For example, we have this remark of inclusiveness from de
Bruijn back in the 70's:
The idea is to make a language such that everything we
write in it is interpretable as correct mathematics ...
This may include the writing of a vast mathematical
encyclopedia, to which everybody (either a human or a
machine) may contribute what he likes. The idea of a kind
of formalized encyclopedia was already conceived and
partly carried out by Peano around 1900, but that was still
far from what we might call automatically readable.
11. The ongoing denigration of the meta-discussion of root logic.
Starting with a choice of root logic before we know what we are going to need
is self-defeating.
PRA and its equivalents have been in use in proof theory for about 60
years, about as long as the discussion of ZF vs.
von-Neumann-Bernays-Goedel set theory. The questions `which root
logic', `which strong logics', `which syntax', `which theorems' will
probably all go on in parallel, as they have for many decades. The
discussion of LF vs. FS0 vs. PRA vs. ?? is not improved, not even
dampened, by assertions that it is premature.
12. Methods more important than theorems.
It is only superficially that the importance of mathematical theory
is in the theorems. Actually, it is in the methods ...
I can agree with that intuitively, and I think that is the way to
think if you want to pursue mathematical discovery via AI. But QED is
less ambitious than AI. When math journals contain formal statements
and proofs of "methods", then they can be added to QED. The success
of programs like Macsyma is rooted in their success at formalizing
methods for doing some kinds of symbolic manipulations. Maybe some
day someone will reduce, say, the method of `forcing' to something as
routine as differentiation, but I don't think it has happened yet and
so it would be extremely hard to add something like that to a system
like QED.
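To make the comparison concrete: differentiation is "routine" in exactly the sense that it is a short structural recursion over expressions. A toy sketch (my own illustration, not anything from Macsyma or QED):

```python
# Toy symbolic differentiator over a tiny expression language,
# illustrating why differentiation counts as a "routine" method.
# Expressions are nested tuples:
#   ('x',), ('const', c), ('add', e1, e2), ('mul', e1, e2).

def d(e):
    """Derivative of expression e with respect to x."""
    if e[0] == 'x':
        return ('const', 1)
    if e[0] == 'const':
        return ('const', 0)
    if e[0] == 'add':
        return ('add', d(e[1]), d(e[2]))
    if e[0] == 'mul':                      # product rule
        return ('add', ('mul', d(e[1]), e[2]),
                       ('mul', e[1], d(e[2])))
    raise ValueError(e)

def ev(e, x):
    """Evaluate an expression at the point x."""
    if e[0] == 'x':
        return x
    if e[0] == 'const':
        return e[1]
    if e[0] == 'add':
        return ev(e[1], x) + ev(e[2], x)
    if e[0] == 'mul':
        return ev(e[1], x) * ev(e[2], x)

# d/dx (x * x) = 2x, so its value at x = 5 is 10.
expr = ('mul', ('x',), ('x',))
print(ev(d(expr), 5))
```

Nothing remotely this simple is known for forcing, which is the point of the contrast.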
Bob
From qed-owner Tue May 4 20:35:31 1993
Received: by antares.mcs.anl.gov id AA15855
(5.65c/IDA-1.4.4 for qed-outgoing); Tue, 4 May 1993 20:27:23 -0500
Received: from swan.cl.cam.ac.uk by antares.mcs.anl.gov with SMTP id AA15847
(5.65c/IDA-1.4.4 for ); Tue, 4 May 1993 20:27:16 -0500
Received: from auk.cl.cam.ac.uk (user jrh (rfc931)) by swan.cl.cam.ac.uk
with SMTP (PP-6.5) to cl; Wed, 5 May 1993 02:26:59 +0100
To: QED
Subject: Formalizing notations and context
Date: Wed, 05 May 93 02:26:53 +0100
From: John Harrison
Message-Id: <"swan.cl.cam.:104680:930505012709"@cl.cam.ac.uk>
Sender: qed-owner
Precedence: bulk
I don't think it's so much formalizing mathematical *notations* that's the
problem. I agree with Bob Boyer -- my experience with HOL suggests that
embedding in lambda-calculus gives a very nice treatment of disparate uses of
bound variables (but some sloppy stuff like "the polynomial f(x)" has to go!)
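A minimal sketch of that point (in Python rather than HOL, and with names of my own choosing): once variable binding is expressed with lambda, many superficially different mathematical binders become ordinary higher-order functions, and "the polynomial f(x)" is replaced by just the function f.

```python
# Binders as higher-order functions: the bound variable in a sum or
# product is supplied as a lambda, so both binders use one uniform
# binding mechanism.

def summation(lo, hi, body):
    """Sum_{i=lo}^{hi} body(i), with the bound variable as a lambda."""
    return sum(body(i) for i in range(lo, hi + 1))

def finite_product(lo, hi, body):
    """Prod_{i=lo}^{hi} body(i) -- the same binding mechanism again."""
    result = 1
    for i in range(lo, hi + 1):
        result *= body(i)
    return result

# Sum_{i=1}^{4} i^2 = 30 and Prod_{i=1}^{4} i = 24.
print(summation(1, 4, lambda i: i ** 2))
print(finite_product(1, 4, lambda i: i))
```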
Rather, I see the principal difficulty as being the large amount of informal or
semi-formal context which is traditionally associated with notations. For
example, flicking through the first few pages of Matsumura: "Commutative ring
theory" we find:
"If f:A->B is a ring homomorphism and J is an ideal of B, then f^-1(J) is an
ideal of A and we denote this by A /\ J; if A is a subring of B and f is the
inclusion map then this is the same as the usual set-theoretic notion of
intersection. In general this is not true, but confusion does not arise."
"When we say that R has characteristic p, or write char R = p, we always mean
that p > 0 is a prime number".
"In definitions and theorems about rings, it may sometimes happen that the
condition A =/= 0 is omitted even when it is actually necessary."
If one hopes to make mathematicians used to ignoring lowbrow details like this
use QED, then considerable powers of persuasion are called for. If it can't be
done, then some very difficult AI problems have to be faced!
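The first quoted fact, at least, is easy to check concretely once the implicit context is spelled out. A minimal sketch in Python (the particular homomorphism and ideal are my own choices, not Matsumura's):

```python
# Sanity check for a concrete case of "f^-1(J) is an ideal of A":
# take f : Z -> Z/6Z, f(x) = x mod 6, and J = {0, 3}, the ideal
# generated by 3 in Z/6Z.  Then f^-1(J) is the multiples of 3,
# which should be closed under addition, negation, and
# multiplication by arbitrary integers.  (Z is infinite, so we
# test only a finite window.)

def f(x):
    return x % 6

J = {0, 3}                                     # the ideal (3) in Z/6Z
preimage = [x for x in range(-20, 21) if f(x) in J]

for a in preimage:
    for b in preimage:
        assert f(a + b) in J                   # closed under addition
    assert f(-a) in J                          # closed under negation
    for r in range(-5, 6):
        assert f(r * a) in J                   # absorbs multiplication

print("f^-1(J) behaves like an ideal on the tested window")
```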
John Harrison (jrh@cl.cam.ac.uk).