*Publisher’s Note: Kenneth Eaton’s essay was one of two essays that won the Benjamin Franklin Finkel Mathematics Essay Contest. The Northern Review congratulates Kenneth on this honor.*

Mathematics is a language of logic. Equations take the place of words, and axiomatic structures hold its theories together. Like any other language, mathematics had to be built up over time. This is done with proofs that link known axioms together to demonstrate that a new theorem is true. Each new theorem then becomes a base for further proofs, allowing the foundation of mathematics to grow continuously. While the field of mathematics may seem largely settled, beyond the Pythagorean theorem and the trigonometric identities lie numerous unsolved problems. One of these is the P versus NP problem, which asks whether every problem whose solution can be verified quickly can also be solved quickly. Although many solutions have been proposed, none has yet been formally accepted.

Though modern attempts to solve these problems still rely on human ingenuity, powerful computers offer new approaches. With the rise of computers in the 1950s, automated theorem provers began to be developed to find such proofs. The technology started slowly, but growing interest in artificial intelligence (AI) and continued efforts at improvement have made it an exciting area of growth for the near future.

While theorem provers belong to computer science, both fields have a fundamental grounding in mathematics. The use of computers to construct mathematical proofs gained attention in 1956, when Allen Newell and Herbert Simon revealed an AI program called the Logic Theorist. It was created to solve general logic problems found in the book *Principia Mathematica*. This three-volume work showed how areas of mathematics, including set theory, cardinal numbers, and real numbers, could be reduced to symbolic logic proofs, thus providing a number of proofs a theorem prover could work to verify. The successor Newell and Simon created to the Logic Theorist, the General Problem Solver, approached proofs through a human-like thought process. The General Problem Solver uses a heuristic approach and means-ends analysis to reason about how a solution to a problem could be found from pieces of known information. Essentially, the system starts with a set of knowns and a goal, and uses what is known to take gradual steps toward the goal until the goal is reached. While the Logic Theorist was certainly limited in its capabilities, it provided a starting point for later work on theorem provers.
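To make the idea of means-ends analysis concrete, here is a minimal Python sketch. It is only an illustration of the general technique, not the General Problem Solver itself; the toy domain, operator names, and fact names are all hypothetical.

```python
# A minimal sketch of means-ends analysis: keep applying an operator
# that reduces the difference between the current state and the goal.
# The domain below (getting a book from the library) is hypothetical.

def means_ends(state, goal, operators):
    """Greedily apply operators whose effects satisfy unmet goal facts."""
    plan = []
    state = set(state)
    while not goal <= state:
        unmet = goal - state
        # Find an operator that achieves an unmet goal fact and whose
        # preconditions are already satisfied by the current state.
        for name, (pre, add) in operators.items():
            if add & unmet and pre <= state:
                state |= add
                plan.append(name)
                break
        else:
            return None  # no applicable operator: give up
    return plan

# Toy domain: start "at home", end "at library" with a book in hand.
ops = {
    "walk to library": ({"at home"}, {"at library"}),
    "borrow book":     ({"at library"}, {"have book"}),
}
print(means_ends({"at home"}, {"at library", "have book"}, ops))
# → ['walk to library', 'borrow book']
```

The loop embodies the essay's description: a set of knowns, a goal, and gradual steps that each close part of the remaining gap.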

Many modern prover programs have followed with the goal of proving mathematical theorems. One notable example, from a prover called EQP, was a proof that Herbert Robbins’ equation can serve as a basis for Boolean algebra. Boolean algebra is a branch of mathematics built around binary truth values (true or false), which are commonly represented as 1 and 0 rather than a full set of numbers. Boolean algebras also base their statements on the logical operators “and” and “or,” unlike elementary algebras, which rely on addition and multiplication.
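The contrast between Boolean and elementary algebra can be seen in a few lines of Python: over the truth values 0 and 1, “and” and “or” resemble multiplication and addition, but they are not the same operations.

```python
# Boolean "and"/"or" over the truth values 0 and 1, side by side with
# the addition and multiplication of elementary algebra.
rows = [(x, y, x and y, x or y, x + y, x * y)
        for x in (0, 1) for y in (0, 1)]
for row in rows:
    print(row)
# The last row is (1, 1, 1, 1, 2, 1): note that 1 or 1 = 1,
# whereas 1 + 1 = 2, so "or" is not simply addition.
```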

Robbins claimed in 1933 that a simplified version of the Huntington equation could serve as a foundation for Boolean algebra. It was not clear at the time whether every algebra satisfying Robbins’ equation had to be a Boolean algebra. The EQP prover achieved this result after eight days of continuous runtime, using Boolean conditions formulated by Steven Winker. Winker, interestingly, worked with another theorem prover at Argonne National Laboratory, called OTTER, to assist his research. With the help of OTTER, Winker developed two conditions which, if proven true, would also prove that any Robbins algebra must be a Boolean algebra.
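For reference, the two equations at stake are commonly written as follows, with $+$ standing for “or” and $n$ for negation (this notation is an assumption of presentation, not taken from the essay):

```latex
\text{Huntington's equation:}\quad n(n(x) + y) + n(n(x) + n(y)) = x
```

```latex
\text{Robbins' equation:}\quad n(n(x + y) + n(x + n(y))) = x
```

Robbins’ version is the “simplified” one in the sense that it uses negation fewer times, yet it was far from obvious that it carries the same strength.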

Finally, in 1996, EQP was able to prove both Winker conditions; sixty-three years after Robbins made his claim, it was verified. The “thought” process behind the EQP prover made use of associative-commutative unification. This technique relies on the associativity and commutativity axioms, which allow mathematical terms in a statement to be regrouped and reordered, respectively. By doing this, one statement is shown to equal another, which in theorem proving can be used to take previously proven statements and assert that a new statement is also true. The development of associative-commutative unification and its use by EQP to prove Robbins’ equation was a significant achievement for the field of theorem provers, but it left concerns about the accuracy of the proofs so constructed.
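The simplest ingredient of this idea can be sketched in Python: checking that two terms are equal modulo associativity and commutativity by normalizing each one. (Full AC unification, which must also solve for variables, is considerably more involved; this sketch and its term encoding are illustrative only.)

```python
# Terms are nested tuples like ("+", a, b). To compare terms modulo
# the associativity and commutativity of "+", normalize each term:
# flatten nested applications (associativity) and sort the arguments
# (commutativity), so AC-equal terms share one canonical form.

def ac_normalize(term):
    if not isinstance(term, tuple):
        return term  # a variable or constant
    op, *args = term
    flat = []
    for arg in (ac_normalize(a) for a in args):
        # Associativity: ("+", ("+", x, y), z) becomes ("+", x, y, z).
        if isinstance(arg, tuple) and arg[0] == op:
            flat.extend(arg[1:])
        else:
            flat.append(arg)
    # Commutativity: argument order is irrelevant, so sort.
    return (op,) + tuple(sorted(flat, key=repr))

t1 = ("+", ("+", "x", "y"), "z")   # (x + y) + z
t2 = ("+", "z", ("+", "y", "x"))   # z + (y + x)
print(ac_normalize(t1) == ac_normalize(t2))  # → True
```

This is exactly the regrouping and reordering the axioms license: two statements written differently are revealed to be the same statement.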

With continued efforts, the field of automated theorem provers has grown, resolving concerns as they arise. One approach to verifying their proofs is to have a separate program check them, producing output closer to English-language statements so that people can easily follow the verification. Furthermore, modern computational languages and supercomputers allow the calculations to be done faster than ever before. The mathematical theory behind many current provers is first-order predicate calculus, which allows many types of problems to be expressed neatly as logic statements. For example, a statement such as “if it rains, I will wear a coat” can be broken down into a relationship between it raining and wearing the coat, where each is treated as a separate condition. Given one condition (it is raining), the other is known to be true (I am wearing a coat). By formulating mathematical axioms and problems in a similar manner, a theorem prover is able to arrive at conclusions, such as a new theorem derived from the old theorems and axioms it knows. The future of this technology lies in constructing proofs from what is known and then using those new results to solve even more problems.
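The rain-and-coat inference can be sketched as a tiny forward-chaining loop that applies modus ponens until no new facts follow. This is a deliberate simplification: real provers work with full first-order logic, not bare propositional rules, and the fact and rule names here are illustrative.

```python
# Forward chaining with modus ponens: given a fact and a rule whose
# premise matches it, conclude the rule's conclusion, and repeat
# until no new facts can be derived.

def forward_chain(facts, rules):
    """Close a set of facts under rules of the form (premise, conclusion)."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            if premise in facts and conclusion not in facts:
                facts.add(conclusion)  # modus ponens
                changed = True
    return facts

rules = [("it is raining", "I wear a coat")]
derived = forward_chain({"it is raining"}, rules)
print("I wear a coat" in derived)  # → True
```

Replacing the English sentences with formal axioms and theorems gives the basic shape of how a prover grows its stock of known results, each new conclusion becoming fuel for the next.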