Opinion: Man, Machine and Math: Can we automate reason?


Jakobe Bussey

In the modern world, terms like machine learning, artificial intelligence and intelligent systems have become ubiquitous, and rightly so. There has never been a time in human history when we've had the raw computing power to put neural networks to use so effectively, and the results have been astonishing. True, this is far from artificial general intelligence, the holy grail of artificial intelligence research, but the feats our cutting-edge systems can perform are worth noting.

Indeed, the question for this century does not appear to be “what can artificial intelligence do?” It seems closer to “what can’t artificial intelligence do?” The limits we place upon this technology seem to collapse just as quickly as we put them up. 

There was once a time when one could guess that a computer could never play a game like chess at a high enough level to stand a chance against a grandmaster. The raw human emotion interspersed throughout a chess match seemed simply too daunting to replicate. Yet that idea was toppled last century at the hands of Deep Blue, the chess-playing supercomputer that defeated Garry Kasparov, one of the greatest chess players in history. Even the arts, the bastion of human expression and the inner reaches of the soul, have come under barrage from fascinating advances in language processing, prediction models like GPT-3, and, most strikingly, an AI-generated painting that sold for over $400,000 in 2018. With the humanities in its crosshairs, it seems almost comical to assume that mathematics, the most formal and computationally intensive of fields, would be safe from the onslaught of AI. But perhaps it is.

Whether one could automate mathematics and create a program that executes mathematical reasoning on a creative level has been hotly debated in mathematical circles for quite some time. But to understand the depth of this debate, one must first distinguish between "doing math" and the central task of a mathematician: constructing proofs.

A mathematical proof is a methodical stream of logic that establishes the truth of a certain "conjecture." Once the proof is validated, the conjecture becomes a theorem (like the Pythagorean Theorem or the Fundamental Theorem of Calculus), which other mathematicians can then use to construct further proofs. In some respects, mathematical proofs are analogous to the theories generated in other scientific fields. Unlike those fields, however, mathematical proofs demand far more precision in their creation, as they are intimately linked with logic in a way that most other scientific disciplines are not. A mathematician cannot simply assume that a property holds, even after witnessing a million instances where it does. The property must be rigorously derived from the logic itself. It is that logical reasoning that undergirds the writing of proofs, but could we possibly implement this high-level reasoning in a computer?
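To make the contrast concrete, consider a toy statement: the sum of two even numbers is even. No amount of spot-checking examples proves it, but a few lines of logic cover every case at once. Proof assistants such as Lean let a computer verify exactly this kind of argument; the sketch below (theorem name and phrasing are illustrative, written in Lean 4 syntax) encodes "even" as "twice something" and closes the goal with a routine arithmetic step.

```lean
-- Claim: if m and n are each twice some natural number,
-- then their sum is twice some natural number.
theorem even_add_even (m n : Nat)
    (hm : ∃ a, m = 2 * a) (hn : ∃ b, n = 2 * b) :
    ∃ c, m + n = 2 * c := by
  obtain ⟨a, ha⟩ := hm   -- unpack the witness for m
  obtain ⟨b, hb⟩ := hn   -- unpack the witness for n
  exact ⟨a + b, by omega⟩ -- m + n = 2 * (a + b), by linear arithmetic
```

The machine here is only checking a human-written argument, not inventing one; the open question in the article is whether the inventing step can be automated too.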

This is not a trivial question, even taking into account the trouncing AI has given other fields. The late mathematical logician and Fields Medalist Paul Cohen espoused a view most comparable to the mathematical philosophy called "Formalism." One of its central tenets is that mathematics is ultimately just string and symbol manipulation, and thus readily amenable to computation. However, the Platonist view that seems to grip most mathematicians today stands in stark contrast to this opinion. The Platonist perspective lends mathematics a more ethereal quality, holding that mathematical objects have eternal and unchanging properties; this makes mathematical proof an exercise in discovery rather than simply shifting symbols around on paper. This doesn't necessarily rule out machines writing truly exceptional proofs, but it certainly doesn't support the case for automation as seamlessly as the Formalist view does.

Now, is advanced statistics running on a high-end supercomputer really all that is required to prove (or disprove) the Riemann Hypothesis? Probably not. But the world of technology is constantly evolving, and if machine learning engineers further incorporate logical systems into their algorithms, there may come a time when we can prove exceptional things with our computer companions. How near or far that time is remains to be seen.