One Law to Rule Them All
Tim Wilkinson tries to chart our quest for consistency without contradicting himself.
Consistency doesn’t guarantee truth, but as ancient Indian and Greek philosophers realised, it helps. Socrates, both in person and in his incarnation as the main character in many of Plato’s dialogues, was famous for his trademark method of posing questions to his interlocutors in order to tease out contradictions in their thinking; but it was in one of Plato’s students – Aristotle – that consistency found its champion.
Aristotle & Barbara
Aristotle (384-322 BC) identified a number of rules of reasoning he termed syllogisms, which were later given charming names by medieval philosophers. It would take too long to describe them all, but here’s an example called Barbara:
Premise 1: All mammals are vertebrates.
Premise 2: All cats are mammals.
Conclusion: Therefore all cats are vertebrates.
Barbara has nothing to do with the taxonomic classification of living things. If we replace the first premise with ‘all mammals are aeroplanes’ the conclusion would be that all cats are aeroplanes. This is not true, but only because the new premise is false; there’s nothing wrong with the logic of the argument.
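For readers who enjoy seeing such things made mechanical, Barbara can be checked in a modern proof assistant. The following sketch in Lean is my own illustration, not anything from Aristotle: given that every mammal is a vertebrate and every cat is a mammal, the machine verifies that every cat is a vertebrate.

```lean
-- Barbara: all M are P; all S are M; therefore all S are P.
example (Thing : Type) (mammal vertebrate cat : Thing → Prop)
    (h1 : ∀ x, mammal x → vertebrate x)  -- all mammals are vertebrates
    (h2 : ∀ x, cat x → mammal x)         -- all cats are mammals
    : ∀ x, cat x → vertebrate x :=       -- all cats are vertebrates
  fun x hcat => h1 x (h2 x hcat)
```

Replacing `vertebrate` with `aeroplane` leaves the proof untouched, which is exactly the point: the validity of the form is independent of the truth of the premises.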
Underpinning Aristotle’s logic was the Principle (sometimes Law) of Non-Contradiction – two contradictory statements cannot simultaneously be true – and the similar, but rather more contentious, Law of the Excluded Middle – two contradictory statements cannot simultaneously be false. In Aristotle’s own words: “It is impossible for the same property to belong and not to belong at the same time to the same thing and in the same respect” (Metaphysics, IV). To Aristotle, the principle of non-contradiction (PNC) was not only self-evident, it was the foundation of all other self-evident truths, since without it we wouldn’t be able to demarcate one idea from another, or in fact positively assert anything about anything – making rational discourse impossible.
My favourite justification of PNC’s special status in logic comes from the philosopher and polymath Avicenna (c. 980-1037), who had this to say about PNC sceptics in his own Metaphysics: “As for the obstinate, he must be plunged into fire, since fire and non-fire are identical. Let him be beaten, since suffering and not suffering are the same. Let him be deprived of food and drink, since eating and drinking are identical to abstaining.”
Non-Contradiction in Practice
Could an omnipotent God create a stone too heavy for Him to lift?
The famous ‘Paradox of the Stone’ asks whether God could create a stone so heavy that He couldn’t lift it. If God is all-powerful, then He should be able to do anything; but either He cannot create such a stone, or else having created it, He cannot lift it; either way we seem to have discovered something He cannot do, so He is not all-powerful. One resolution of this paradox, favoured by René Descartes, is to say that God’s omnipotence gives Him such power that He can make a stone too heavy for Him to lift, and He can also lift it.
If you think Descartes’ answer seems a bit suspect, you’re in good company: over the years the majority of theologians and philosophers have preferred the solution provided by St Thomas Aquinas, who held that omnipotence cannot confer the power to do logically incoherent things, such as draw square circles, or make rocks too heavy for omnipotent beings to lift.
In the philosophy of science, Gottfried Wilhelm Leibniz (1646-1716) applied PNC to the laws of physics, and concluded that even God couldn’t create a world where nature contradicts itself. Today, experiments in quantum mechanics routinely produce completely different outcomes depending only on how measurements are made; but it is telling that no matter how baffling such observations become, there is no contradiction. So even quantum mechanics is only paradoxical in the sense of being counterintuitive. Our common sense may be offended, but Leibniz’s self-consistent universe survives intact.
For a full-blown contradiction in physics, consider time-travel. Relativistic time dilation, which facilitates travel to the future by slowing down time for the traveller, is well established by experiment, and introduces no danger of inconsistency. But travel to the past, or to the present from the future, opens the door to a number of nasty paradoxes – a classic example being that if you kill your past self at a time before you stepped into the time machine, you will not be alive to travel back in time and pull the trigger. Having dodged the bullet, you do survive to travel back in time and kill yourself… and so on.
Much ink has been expended analysing such situations, and since backward time-travel doesn’t necessarily result in contradictions, there may be some possible worlds in which it is achievable. Nevertheless, taking Leibniz and Aristotle as our guides, backward time-travel is inconsistent with self-aware creatures capable of freely interacting with their surroundings. If you believe the universe contains the latter, you cannot also believe in travel backwards in time and remain consistent. (Here ‘free interaction’ presupposes no controversial philosophical notions such as strong versions of free will; it requires only that I could locate my past self and kill him.)

Although solutions to the equations of general relativity that appear to allow travel to the past have been found (coincidentally, some of them were found by Kurt Gödel, of whom more later), in this case Aristotle trumps Einstein. Solutions to mathematical equations cannot be realised if they pave the way to internally inconsistent configurations of reality. Consistency for time-travellers can be restored by placing absurd restrictions on free interaction, postulating parallel timelines, or resorting to other sci-fi contrivances; but time-travel as commonly understood – visiting our own actual past and freely interacting with people who really are our own past selves or our ancestors – is out of the question.

Stephen Hawking has gone as far as to propose a ‘Chronology Protection Conjecture’, to the effect that physics can’t allow travel to the past except in special cases incapable of generating paradoxes. Considering that Hawking has also recently written that philosophy is dead (see ‘Hawking Contra Philosophy’, Philosophy Now Issue 82), it’s refreshing to note that, like omnipotence, even physics must yield to logic.
Consistency in Ethics
How should we treat each other, and why? Many different ways of tackling this problem have been advanced. The different approaches sometimes result in agreement on what is good, and sometimes not, but within any given theory consistency is crucial. Consistency in ethics is perhaps most evident in the principle of the Golden Rule – the idea that we should treat others as we would consent to be treated in similar circumstances. The Golden Rule has to be wielded carefully since naïve application can easily lead to absurdities. Properly understood however, its consistency criterion is an incredibly powerful ethical tool, even though it doesn’t actually tell us anything about what is right or wrong; we have to work that out for ourselves by applying the rule in conjunction with other considerations, such as ideas of how people might like to be treated.
Consistency in Mathematics
Euclid by Raphael
Around 300 BC, Euclid of Alexandria wrote his Elements, one of the most influential works in the history of mathematics. The Elements is usually remembered for its rich geometry, but it also contains a proof of the fact that there are infinitely many prime numbers – a theorem which ranked third in a poll run by the journal The Mathematical Intelligencer to discover the most beautiful results in mathematics. (Incredibly, the top two results, and three of the top five, were the work of the same mathematician, Leonhard Euler (1707-1783), but that’s another story.) The proof that there are infinitely many primes usually given today is not quite the one in the Elements, but the idea is the same. Roughly speaking, today’s proof involves assuming that there are only a finite number of primes, then considering what happens if they’re all multiplied together and one added to the result. Either this new number is prime, or if not, it must be divisible by a prime number not on the original list. Both outcomes contradict the original supposition that it is possible to produce a finite list of all the primes, which must therefore be incorrect.
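To see Euclid’s construction in action, here is a short illustrative sketch in Python (an informal demonstration, of course, not the proof itself): given any finite list of primes, multiplying them together and adding one yields a number whose smallest non-trivial divisor is a prime missing from the list.

```python
def prime_outside(primes):
    """Given a finite list of primes, exhibit a prime not on the list,
    following Euclid's construction: multiply them all and add one."""
    n = 1
    for p in primes:
        n *= p
    n += 1
    # n leaves remainder 1 when divided by each prime on the list, so its
    # smallest divisor greater than 1 is a prime missing from the list.
    d = 2
    while n % d != 0:
        d += 1
    return d

print(prime_outside([2, 3, 5, 7]))  # 2*3*5*7 + 1 = 211, itself prime
```

Note that the constructed number need not itself be prime: starting from [2, 3, 5, 7, 11, 13] gives 30031 = 59 × 509, and the construction delivers the new prime 59.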
Not only is this one of the most famous results in mathematics, it’s also an excellent illustration of the method of proof known as reductio ad absurdum (‘reduction to absurdity’), where one temporarily assumes the opposite of what one is trying to prove, then shows by rigorous deduction that this assumption leads to a contradiction. So important is the reductio method to mathematics that G. H. Hardy (1877-1947) was moved to write that reductio “is a far finer gambit than any chess play: a chess player may offer the sacrifice of a pawn or even a piece, but a mathematician offers the game” (from A Mathematician’s Apology) – that is, the validity of such proofs rests on the consistency of mathematics as a whole. We shall briefly return to Hardy later, but for now, notice that without PNC, not only would the proof about prime numbers not work, it wouldn’t even make sense to discuss the matter, since the finite and the non-finite (infinite) would be the same.
Danger of Explosion
In the centuries that followed Aristotle, statements including ‘and’, ‘or’ and ‘if… then’ were incorporated into logical theory, and by medieval times we had something recognisable as the precursor to what is today called propositional logic – this being the study of how true propositions can validly be combined to produce new ones. Upon developing propositional logic, medieval logicians noticed something interesting: if they allowed themselves just one contradiction, they seemed to be able to arrive at any conclusion whatever.
Writers on logic often refer to this notion that ‘anything follows from a falsehood’ but rarely explain why this is the case. Here’s a modern version of the medieval idea: suppose we would like to prove the proposition that ‘Bugs Bunny is an alien’. First, notice that if ‘A’ is any true statement, and ‘B’ is any other statement, whether true or false, then the combined statement ‘either A is true or B is true’ is true, because A is true. Second, if we know that ‘either A is true or B is true’ and we discover that A is false, then B must be true. These rules of propositional logic are known as disjunction introduction and disjunction elimination respectively. Suppose next that the Earth is flat, and also that it isn’t flat (a contradiction). Since the Earth is flat, the statement ‘Either the Earth is flat or Bugs Bunny is an alien’ is true, by disjunction introduction. But if ‘Either the Earth is flat or Bugs Bunny is an alien’ is true, since we also know the Earth is not flat, then Bugs must be an alien, by disjunction elimination. We can also prove Bugs is not an alien by a similar argument. Allowing a single contradiction thus results in logical Armageddon, where everything is true and everything is false – an idea that came to be called the principle of explosion or ex falso quodlibet, ‘anything follows from a falsehood’ (strictly, from a contradiction).
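The medieval derivation can be replayed symbolically. Here is a minimal sketch in Lean (the formalisation is my own illustration): given both A and not-A, disjunction introduction followed by disjunction elimination delivers an arbitrary B.

```lean
-- Ex falso quodlibet, derived from the two disjunction rules.
example (A B : Prop) (ha : A) (hna : ¬A) : B :=
  Or.elim (Or.inl ha : A ∨ B)   -- introduction: A ∨ B holds, since A does
    (fun h => absurd h hna)     -- the A case contradicts ¬A
    id                          -- the B case yields B directly
```

Substitute ‘the Earth is flat’ for A and ‘Bugs Bunny is an alien’ for B and you have the argument of the paragraph above in symbols.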
If it’s not possible for two contradictory statements to be simultaneously true, there’s no need to worry about a truth explosion. But the explosion seems entirely contrary to intuition anyway: the geometry of the Earth surely has no bearing on whether Elmer Fudd’s nemesis is of extraterrestrial origin, for instance. Indeed, if you don’t suffer from the handicap of having studied classical logic, you might say that to assert that anything follows from a falsehood is positively illogical, because it opens the door to fallacious arguments where the premises are totally irrelevant to the conclusion.
Consider the self-contradictory statement known as the Liar Paradox: ‘This statement is false’. On the face of it, the Liar Paradox appears to be false if it’s true, and true if it’s false. Perhaps then it’s both? If so, how do we avoid explosion?
In order to deal with this kind of problem, twentieth century philosophers developed so-called paraconsistent logics; for example, by forbidding use of the rules of inference that lead to the explosion, or by introducing relevancy conditions that prevent conclusions being drawn from irrelevant premises. It is important to stress that the development of paraconsistent logics has not led to a disintegration of the distinction between true and false. Quite the reverse: paraconsistent logics prevent harmless contradictions from resulting in logical explosion into areas where they are not relevant, and as such these logics are useful in circumstances where some philosophers think there is good reason to relax PNC slightly and regard certain special types of statement, such as the Liar Paradox, as being both true and false – a minority philosophical position known as dialetheism.
Impressive as Aristotle’s logic was, his syllogistic rules were insufficient to capture the arguments in Euclid’s Elements, let alone in the rest of mathematics, science and philosophy. Leibniz realised the inadequacy of Aristotelian and medieval logic, and began to construct a more comprehensive framework.
Leibniz’s project finally reached fruition two hundred years later with the development, chiefly by Gottlob Frege (1848-1925), of a symbolic notation for logic generally, of which an important special case is known as first-order logic.
First-order logic can be thought of as an enhanced version of propositional logic, expanded to include quantifiers such as ‘there exists’ and ‘for all’, and capable of creating complex symbolic statements which can be said to be true or false. In his 1929 doctoral thesis, Kurt Gödel (1906-1978) showed that when bundled together with a formal language and axioms (premises) in certain types of formal systems, first-order logic is complete in the technical sense of being sufficiently powerful to deduce all the logical consequences of the axioms. Of particular interest is the formal system known as Peano Arithmetic, named after the Italian mathematician Giuseppe Peano (1858-1932), which can be thought of as a formalisation of elementary arithmetic, and which we discuss a little below. But Gödel’s completeness theorem applies to many other important mathematical systems as well.
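To give a flavour of what a ‘formalisation of elementary arithmetic’ involves, here is a toy sketch in Python (an informal illustration of the recursion equations, not Peano’s system itself): natural numbers are built by repeatedly applying a successor operation to zero, and addition is defined entirely by the two equations a + 0 = a and a + S(b) = S(a + b).

```python
# Natural numbers as iterated successors of zero, in the spirit of Peano.
ZERO = ()

def succ(n):
    """The successor of n, i.e. n + 1."""
    return (n,)

def add(a, b):
    """Addition defined by the two Peano recursion equations."""
    if b == ZERO:
        return a               # a + 0 = a
    return succ(add(a, b[0]))  # a + S(b') = S(a + b')

def to_int(n):
    """Translate back to an ordinary Python integer, for display."""
    return 0 if n == ZERO else 1 + to_int(n[0])

two = succ(succ(ZERO))
three = succ(two)
print(to_int(add(two, three)))  # 5
```

The real Peano Arithmetic is a first-order theory with axioms for successor, addition, multiplication and induction; the point of the sketch is only that everything is generated mechanically from zero, successor, and the recursion equations.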
Consistency and Incompleteness
Gödel’s completeness theorem was impressive enough, but there was better to come, in the form of his incompleteness theorems, in which – confusingly – the meaning of the term ‘complete’ is quite different. The completeness theorem shows that in certain systems, all logical consequences of the system’s axioms can be deduced using first-order logic. Gödel’s incompleteness theorems talk about incompleteness in the sense that formal systems sometimes contain statements that cannot be proved, or disproved, from their axioms at all.
One important aspect of the incompleteness theorems that is often neglected is that they act as a bridge between consistency and completeness. The incompleteness theorems can be formulated in many ways, but to make the connection with consistency explicit, consider the following versions:
1.) Given a formal system F containing a certain amount of arithmetic, there exists a true sentence of F that is not provable in F, if and only if F is consistent.
2.) For any formal system F satisfying certain conditions, the consistency of F cannot be established within F itself if and only if F is consistent.
Gödel’s incompleteness theorems are widely misunderstood, and their consistency conditions often overlooked. As a result, one frequently finds them being deployed well outside the confines of mathematical logic, where almost invariably the result is utter nonsense. For examples of the myriad ways in which Gödel’s theorems are misapplied, I recommend the reader to the wonderful book by Torkel Franzén (1950-2006), Gödel’s Theorem: An Incomplete Guide to its Use and Abuse (2005).

One of Franzén’s targets is a claim one reads with depressing regularity, to the effect that all logical systems will generate propositions that they cannot prove, but which humans can see to be true. Maybe this is so, maybe not; but it doesn’t follow from Gödel’s theorems, because it ignores the consistency condition. To emphasise, Gödel showed that certain formal systems contain true statements they cannot prove if they are consistent. For the common claim to follow from the first incompleteness theorem, humans would need to be able to see, or better still prove, the consistency of any given system. But how do we know whether formal systems are consistent or not? Consider Peano Arithmetic: there are several proofs of the consistency of Peano Arithmetic that mathematicians find compelling, but such proofs are highly technical, and have to take place within some sort of framework (which cannot be Peano Arithmetic itself, in view of Gödel’s second incompleteness theorem), the consistency of which can itself be called into question… However, as an alternative to formal proof, it is relatively easy to convince oneself of the consistency of Peano Arithmetic by merely reflecting on its axioms and rules of inference.
While this approach does have some merit, it can lead to problems. Nobody expected to find a contradiction in set theory, until Bertrand Russell famously discovered the one that now bears his name (‘Is the set of all sets which are not members of themselves a member of itself?’). In the light of Russell’s Paradox, set theory had to be hastily patched up to banish contradictions; and to Frege’s dismay, Russell’s Paradox also demolished some of his work on logic and arithmetic.
Even if we take the consistency of Peano Arithmetic as beyond doubt, it is only one, very simple, system. There is no reason whatever for supposing that humans can know the consistency of every formal system no matter how complicated. Yet merely saying that humans know that formal systems can state truths they can’t prove if they are consistent amounts to no more than a repetition of the first incompleteness theorem, and not to a convincing demonstration that humans can always recognise truths that formal systems cannot prove.
Consistency and Minds
Following the work of the brilliant Alan Turing (1912-1954), it became clear that consistency, completeness, and other properties of formal (logical) systems are closely connected with the capabilities of computers. Since Gödel’s theorems say something about the limitations of formal systems, and hence of computers, perhaps this paves the way for them to say something significant about how computers compare to the human mind?
The philosopher John Lucas has written a number of fascinating papers exploring such ideas, starting with Minds, Machines and Gödel (Philosophy, XXXVI, 1961), and summarised in his book The Freedom of the Will (1970). Unfortunately, although Lucas has responded carefully to criticisms of his argument, he has not yet managed to produce a version that has convinced a majority of philosophers.
The mathematician and physicist Sir Roger Penrose has written several books expanding on Lucas’s theme, and has suggested that it might be better to apply Gödel’s theorems to the human mind indirectly. Gödel noted that his incompleteness theorems show that no fixed-axiom structure can completely codify all mathematical truths. Of course, no human can know all mathematical truths either – if only because we don’t live long enough – but neither do we seem to be constrained by a fixed set of axioms. In fact, most of the time human mathematicians don’t pay any attention to axioms at all, and when I know something, I certainly don’t seem to be merely manipulating axioms in a formal system. So it is unclear how far Gödel’s theorems apply to the workings of the human mind, even when the humans are doing mathematics.
These are deep and important philosophical waters, but regrettably it would take us too far from our discussion of consistency to navigate them further. Personally, I hope and expect we will one day show that the human mind does exceed computer logic in many important respects. Unfortunately, no-one has yet found a way to translate Gödel’s theorems into a slam-dunk demonstration that mind exceeds machine, despite the sterling efforts of Lucas and Penrose.
Consistency of Mathematics Revisited
Where does all this leave the consistency of mathematics? It is undeniable that we cannot prove the consistency of mathematics to everyone’s satisfaction; but to my mind, to attempt to do so is to put the cart before the horse. Consistent mathematics seems to be the most useful and interesting kind, so, armed with the PNC, that’s what mathematicians have been searching for these past 2,500 years. Hardy’s quotation about ‘offering the game’ almost makes proof by reductio sound risky, but in fact there’s no real danger in gambling the whole of mathematics on its own consistency, because pruning away the inconsistencies is one of the objectives of mathematicians in the first place. Is there a risk that if all the inconsistencies were to be removed, then nothing would be left? Only the same risk that one equals zero; and if that’s the case, then anything you can think of is true; and false; and neither; and both.
Far beneath the surface layers of mathematics, philosophy, and science, lies logical bedrock, where the word ‘Aristotle’ is carved into the stone. We live in the shadow of Aristotle and Gödel, striving for consistency, and believing reason will guide us to the most irrefutable truths we will ever know. As Mark Knopfler of Dire Straits sang: if two men say they’re Jesus, at least one of them must be wrong. You just can’t argue with logic like that.
© Dr Tim Wilkinson 2013
Tim Wilkinson used to teach mathematics at the University of Newcastle-upon-Tyne, and is now a writer in the North East of England.