A new approach to understanding why logic works and its applications

“I think what you have done in your dissertation is quite interesting and makes a real contribution to the program of understanding what we mean by logical consequence.”
Jon Barwise

My dissertation project developed a framework for understanding why logic works, and applied that framework to create new techniques for building models of logical consequence that are more effective for modeling the meaning of feature structures (an abstraction related to object-oriented expressions). The published version of this work may be found here.
 
The original problem was to create an effective technique for modeling the consequence relation for object-oriented data. What does that mean? Consequence is the relation that tells you, for any statement x in some language, which other statements in the language you thereby know, given that you know x. More formally, for a language L and statements P and Q in L, Q is a consequence of P just in case whenever P is true, Q must be true. Consequence is an aspect of semantics, the practice of modeling the meaning of expressions in languages. One could say it is the central concept in logic.
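The definition can be made concrete in miniature. The sketch below checks consequence for simple propositional statements (a deliberately small stand-in for the first-order case discussed here) by enumerating truth assignments: Q follows from P exactly when no assignment makes P true and Q false.

```python
from itertools import product

def is_consequence(p, q, atoms):
    """Q is a consequence of P just in case every truth assignment
    that makes P true also makes Q true."""
    for values in product([True, False], repeat=len(atoms)):
        env = dict(zip(atoms, values))
        if p(env) and not q(env):
            return False  # found a counterexample assignment
    return True

# "A and B" has "A" as a consequence, but not the other way around.
p = lambda env: env["A"] and env["B"]
q = lambda env: env["A"]
print(is_consequence(p, q, ["A", "B"]))  # True
print(is_consequence(q, p, ["A", "B"]))  # False
```

The assignments here play the role Tarski's "facts" play for full first-order logic: consequence is defined by quantifying over all the ways the world could be.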
 
Working out a mathematically rigorous way of modeling consequence for sentences of first-order logic was a fundamental advance in the field of logic. Having a mathematical model of consequence that was independent of syntactic deductive rules enabled the properties of deductive systems to be proven. These properties included soundness (do the proof rules generate only valid conclusions?) and completeness (do a set of axioms and proof rules generate all valid conclusions?).
 
The original work was done by the Polish mathematician Alfred Tarski, who published his key results in 1936[1]. Tarski developed a mathematical model of consequence that consisted of several parts. He began with the language for which the consequence relation was to be modeled. The statements in that language were sentences of first-order logic; in the original application they were statements of number theory. Tarski supplemented the set of statements with a way of representing the “facts” the statements were about, and a recursive definition of truth that specified which statements were true relative to which sets of facts. He then defined a relationship in terms of that compound structure (sentences, facts, truth) which he claimed both generated and explained the consequence relation for the language.
 
Tarski’s approach revolutionized logic by making semantics mathematical. It also had important practical implications. In 1970, E.F. Codd of IBM Research defined a new approach to database systems: the relational database. This new approach can be traced to concepts created by Tarski. (The connection was not direct, that is, Codd did not reference Tarski, but it can be traced[2].) The clean mathematical semantics of the relational database enabled programmers to say “what” they wanted instead of “how” to compute it (as was the case with the so-called “hierarchical databases” that were then dominant). While the initial relational databases were slower than their hierarchical competitors, the mathematical model made layers of implementation and generations of optimizations possible, such that the relational database eventually took over the database world. Software based on mathematical models of meaning can generate disruptive value in the world.
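The “what, not how” contrast can be shown in a few lines. The toy table and column names below are purely illustrative; the point is that the declarative query states a condition on the result, while the procedural version spells out the traversal.

```python
import sqlite3

# A tiny illustrative table (hypothetical data, not from the text).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (name TEXT, dept TEXT)")
data = [("Ada", "Research"), ("Grace", "Systems"), ("Alan", "Research")]
conn.executemany("INSERT INTO employees VALUES (?, ?)", data)

# "What": a declarative statement of the desired rows; the engine
# decides how to find them (scan order, indexes, join strategy).
rows = conn.execute(
    "SELECT name FROM employees WHERE dept = ? ORDER BY name",
    ("Research",)).fetchall()
declarative = [name for (name,) in rows]

# "How": the equivalent hand-written traversal, the style a
# hierarchical system forced on the programmer.
procedural = sorted(name for name, dept in data if dept == "Research")

print(declarative)  # ['Ada', 'Alan']
print(procedural)   # ['Ada', 'Alan']
```

Both compute the same answer, but only the first leaves the execution strategy free to improve underneath a stable semantics.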
 
Tarski’s method of modeling consequence was important and valuable. But it is not the most effective way to model consequence for object-oriented data (technical term: “feature structures”). Here, instead of asking whether sentence Q is a consequence of sentence P, the relation to be modeled is consequence between objects: for any pair of objects P and Q in the language, if I know object P, do I also know object Q?
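A minimal sketch of consequence between objects, assuming feature structures represented as nested dictionaries (a common simplification; the dissertation's actual formalism is richer): object Q follows from object P when every feature Q carries is already present in P with a compatible value, i.e. the more specific object entails the more general one.

```python
def entails(p, q):
    """Knowing object p, do I also know object q?
    True when every feature of q appears in p with the same atomic
    value, or with a nested structure that p's value entails."""
    for feature, q_val in q.items():
        if feature not in p:
            return False
        p_val = p[feature]
        if isinstance(q_val, dict) and isinstance(p_val, dict):
            if not entails(p_val, q_val):
                return False
        elif p_val != q_val:
            return False
    return True

# Hypothetical linguistic example: a fully specified noun phrase
# versus a partial description of its agreement features.
specific = {"cat": "np", "agr": {"num": "sg", "per": 3}}
general = {"agr": {"num": "sg"}}
print(entails(specific, general))  # True: the specific carries the general
print(entails(general, specific))  # False
```

This ordering by information content, rather than a separate realm of “facts,” is the shape of the problem the order-consistency approach addresses directly.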
 
The shift from sentences to objects is important in the modern digital world, since so much of our knowledge is now represented not as sentences but as objects (including collections of linked objects, as are found on the World Wide Web or in the Amazon catalog). A fundamental problem to solve is how to transform information from the forms in which it is made available into the forms in which it is useful to someone in some context. Objects are useful not only for modeling the information “out there” but also for modeling the point of view of a subject with a need. The structure of the question yields the structure of the answer.
 
While it is possible to use Tarski’s model-theoretic approach to model consequence for a language of objects, the approach is not ideal. First, the model-theoretic approach demands a formal separation between statements and “facts” (called “models”), yet in object-oriented systems the statements (objects) are the facts. Second, Tarski’s model-theoretic approach demands a centralized structural component (the “facts”) which may need to be changed every time a new type of object is added. Changing the structure of the “facts” may in turn force changes in the definitions of truth. The “facts” in the model-theoretic approach act like “global data” for the whole system, with the resulting challenges for scaling the model: each new type of object makes the central “facts” more complicated. This is why, in integration exercises, a universal metamodel into which all types of models are mapped tends to break down over time. Third, every kind of object has its own way of adding meaning to the objects placed within it. For example, a calendar adds meaning to events, and a map adds meaning (of a different kind) to named buildings. But past techniques for modeling context using model-theoretic semantics took an ad hoc approach.
 
I approached the problem by applying the “inventor’s paradox”: I posed a more general problem whose solution yielded the desired result as a special case. All of this was only possible with the guidance and example of my advisers: John Etchemendy, Jon Barwise, Johan van Benthem, and John Perry. I started with the results of my primary dissertation adviser, John Etchemendy.[3] To summarize an important work in too few words, Etchemendy showed that while Tarski’s approach to model-theoretic semantics worked, it did not work for the reasons Tarski thought it did. Etchemendy presented an alternative explanation of Tarski’s technique, called “representational semantics.”[4]

I took Etchemendy’s alternative explanation and generalized it, creating a core concept which I call the “representational schema.” This schema gives a general form for techniques for creating mathematical models of logical consequence. I then used that general schema to create a new family of techniques for building models of logical consequence: the order-consistency approach. The first version of this approach used a concept found in the work of the mathematician Adolf Lindenbaum.

The order-consistency approach has certain technical advantages over model-theoretic semantics for constructing models of logical consequence for object-oriented systems. The order-consistency approach has no formal separation between statements and models, nor does it have a centralized structural component like the models in model-theoretic semantics. In order-consistency semantics, the statements are the models. The definition of consequence in the new approach can be distributed: for example, a model of consequence for object-oriented data can be defined by giving specifications on a type-by-type basis, with no need for a separate unifying metamodel. Further, the new approach gives a way to directly model the varied effects of context in a general way. In the dissertation, I constructed five different techniques for building models of logical consequence (three of the model-theoretic kind and two of the order-consistency kind), and presented proofs showing the relationships between them. I was able to prove that any language whose consequence relation could be modeled with a model-theoretic approach could be modeled with an order-consistency approach, and vice versa.
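The distributed, type-by-type character of the definition can be sketched as follows. Everything here is a hypothetical illustration of the architectural point, not the dissertation's formalism: each type registers its own ordering test, and the global consequence relation is simply the union of the per-type tests, with no central model to update when a new type arrives.

```python
# Per-type consequence specifications, registered independently.
# There is no central "facts" structure to revise when a type is added.
orderings = {}

def consequence_for(type_name):
    """Decorator registering the consequence test for one type."""
    def register(fn):
        orderings[type_name] = fn
        return fn
    return register

@consequence_for("interval")
def interval_entails(p, q):
    # A tighter interval entails any interval that contains it.
    return q[0] <= p[0] and p[1] <= q[1]

@consequence_for("tag_set")
def tags_entail(p, q):
    # Knowing a larger set of tags entails knowing any subset.
    return q <= p

def entails(type_name, p, q):
    """The global relation: dispatch to the type's own specification."""
    return orderings[type_name](p, q)

print(entails("interval", (2, 3), (1, 5)))    # True
print(entails("tag_set", {"a", "b"}, {"a"}))  # True
```

Adding a new object type means registering one more ordering; nothing already defined has to change, which is exactly the scaling property the centralized “facts” of model theory lack.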

The work showed “why logic works,” gave a pattern for creating new techniques for building models of consequence, and developed ways of comparing the relative power of different modeling techniques. It showed how to understand logic as a tool we use for the construction of knowledge, and how design choices in the construction of the semantic model lead to different properties of the resulting system. A new set of techniques, ones more appropriate for modeling the meaning of object-oriented data, was created and analyzed.

My adviser Jon Barwise was a world-famous logician, a founder of the Center for the Study of Language and Information at Stanford, a professor of Mathematics, Philosophy, and Computer Science, and editor of the Handbook of Mathematical Logic. He wrote: “I think what you have done in your dissertation is quite interesting and makes a real contribution to the program of understanding what we mean by logical consequence.”

I believe that the consequences of the work have yet to be fully realized. Just as Tarski’s model-theoretic semantics led to Codd’s relational database, I believe that order-consistency semantics may be the foundation of a future software system. In particular, I believe it will have value in building models of subjectivity: what it is to be a situated agent with a point of view, and goals relative to that point of view.

[1] Alfred Tarski, “On the concept of logical consequence,” in Logic, Semantics, Metamathematics, 2nd ed., edited with an introduction by John Corcoran, Hackett Publishing Company, Indianapolis, 1983, pp. 409–20.
[2] Solomon Feferman, “Tarski’s influence on computer science,” LICS 2005. https://www.academia.edu/160385/Tarskis_influence_on_computer_science
[3] John Etchemendy, The Concept of Logical Consequence, Harvard University Press, 1990. Reissued by CSLI Publications and Cambridge University Press, 1999.
[4] Historical note: Etchemendy’s work was in a direct teacher-to-student line from Alfred Tarski himself. Alfred Tarski was adviser to Solomon Feferman, who was head of the mathematics department at Stanford. Feferman was also a student of Kurt Gödel at Princeton, and was a primary interpreter of Gödel’s work. Feferman was adviser to Jon Barwise and John Etchemendy, and Jon Barwise was adviser to John Etchemendy.
 
