Beyond the Laws of Nature

Russell Berg gives us a brief philosophical tour of the history of scientific theorising.

The origins of the idea of laws of nature lie with the ancient Greek pre-Socratic philosophers (from the sixth century BC). These philosophers rejected traditional explanations of events in terms of the capriciousness of the gods, explaining natural phenomena instead in terms of natural causes. Some assumed that there were patterns in nature which, once discerned, could be used to make predictions; eclipses were predicted, for instance. However, pattern analysis gave way to runaway reductionism: Thales claimed that all is water; Anaximenes claimed that everything is made of air; and the Atomist school, later taken up by the Epicureans, said that there were just atoms falling in the void. And not one of these philosophers thought that their claims about nature should be tested.

Socrates (469-399 BC) doubted that these natural philosophers had produced anything of value. He believed that such knowledge was worthless compared to ethics, or how to live well, the worthy study for a philosopher. Plato (429-347 BC), his pupil, was even more disparaging about the physical world, which for him was but a shadowy reflection of the pure world of abstract Forms, from which we get all true knowledge. It took Aristotle (384-322 BC) to steer philosophy back towards nature. He wrote the Physics, and founded the systematic study of biology. However, although Aristotle relied on observation as well as logic, like his predecessors he did not pursue experimentation. The pinnacle of ancient science came with Archimedes (287-212 BC), who combined a rigorous logico-deductive method with the testing of his ideas. His scientific method was able to discern patterns in mechanics: he had rudimentary theories of statics (the physics of levers and balance) and hydrostatics (the physics of floating bodies).

Hundreds of years later, neo-Platonists encapsulated laws of nature within their philosophical schemes, and because of this the idea of discovering laws of nature became associated with paganism. During the conflict between Christianity and paganism in the fourth and fifth centuries AD, natural philosophy paid a heavy price for that association, culminating in the (second) destruction of the library in Alexandria, and in the edicts of the Emperor Theodosius which banned natural philosophy throughout the Roman Empire. Eating the fruit from the Tree of Knowledge was clearly prohibited by the early Christian fathers. Only much later did Thomas Aquinas (1225-1274 AD) claim that it was acceptable to nibble at the apple, provided reason remained the handmaid of faith. He employed Aristotelian philosophy to provide rational arguments for belief in God, for example. Aristotle’s natural philosophy (ie, his science) crept in along with the rest of his philosophy, once theologians had argued that he posed no threat to the authority of the Church.

After Aristotle

The first serious challenge to Aristotelian physics had been made much earlier, by John Philoponus – a sixth century AD Alexandrian philosopher and theologian. Aristotle had said that an arrow continues to move when it is released from the bow because it is pushed by the air behind it, but Philoponus showed that this idea was erroneous by agitating the air behind a stone, which did not move. Philoponus postulated instead that an object moves when a force is imparted to it, and continues moving until that force is exhausted. However, like many Alexandrian theologians of the sixth and seventh centuries, he held theological views which were considered heretical by the Roman Catholic Church, and as a result Philoponus’s work was ignored for around eight hundred years.

During the fourteenth century some theologians and scholars, mainly in France and then England, were also unhappy with Aristotelian physics, which held, for example, that objects fall to the ground because they seek their natural place. These theologians believed that a more sophisticated physics would yield greater glory to God. Hence they turned to natural philosophy to discover the laws of nature ordained by God to govern the material world, just as God’s moral laws revealed in the Bible govern our behaviour.

The first fourteenth century scholar to move away from Aristotle was Jean Buridan, who was working in the Faculty of Arts at the University of Paris. Not joining the more prestigious Faculty of Theology gave Buridan the freedom to draw on a wider range of sources from which to build his secular philosophy. Buridan decided to develop Philoponus’s work on dynamics. He called the force imparted to a body to keep it moving ‘impetus’, and stated that once a body had received this impetus it would continue moving at uniform speed in a straight line unless interfered with. Buridan’s work thus predates Newton’s First Law of Motion by some three hundred years. Buridan also took a mathematical approach, claiming that the impetus equals the mass times the velocity – thus making impetus similar to the concept of momentum in modern physics.
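In modern notation (a reconstruction rather than Buridan’s own symbolism), his claim can be written as

\[ \text{impetus} = m v \]

which has the same form as the modern definition of momentum, \( p = mv \), though impetus was conceived as a cause of continued motion rather than as a conserved quantity.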

Buridan’s theory of impetus was taken up by a school of natural philosophers and theologians at Oxford University known as the ‘Oxford Calculators’. The first of these was Richard Kilvington (1302-1361), who used the mathematical approach from impetus theory to develop his theory of local motion. Thomas Bradwardine extended Kilvington’s work into a mathematical approach to mechanics (laws of motion). Bradwardine was a theologian as well as a mathematician and natural philosopher, and again, he believed that his work concerning the laws of nature would make known the glory of God. He was lucky enough to live at a time when there was no conflict between theology and natural philosophy.

With the help of colleagues, Bradwardine developed the ‘mean speed theorem’, linking velocity with acceleration. This theorem was later used by Galileo in his ‘law of falling bodies’. However, Bradwardine was unfortunate in that he lacked the mathematical tools to develop an accurate mathematical treatment of motion. Hence the best theory he could conceive was the ‘ratio theory’, in which he stated that an arithmetic increase in velocity corresponds to a geometric increase in the original ratio between force and resistance. This was accepted as a law of nature for about a hundred years, despite known problems with its accuracy.
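In modern notation (again a reconstruction, not Bradwardine’s own symbolism), the mean speed theorem says that a body accelerating uniformly from velocity \( v_0 \) to \( v_f \) over a time \( t \) covers the same distance as one travelling at the mean speed throughout:

\[ s = \tfrac{1}{2}(v_0 + v_f)\, t \]

Bradwardine’s ratio theory, by contrast, is usually rendered as \( v \propto \log(F/R) \): doubling the velocity requires squaring the ratio of motive force \( F \) to resistance \( R \).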

The ratio theory was only overturned (by Newtonian mechanics) after Isaac Newton (1642-1727) developed calculus. Newton’s mathematical and experimental approach led to his laws of motion, his mathematical treatment of gravity, and his corpuscular theory of light. But soon after Newton formulated his laws of nature, David Hume (1711-1776) undermined the whole concept of laws of nature by throwing doubt on the principle of cause and effect, and by claiming that the method of induction used by scientists relied purely on subjective habit rather than on any objective pattern in the world. (Induction is the method of forming general laws of nature from observed instances: concluding ‘all swans are white’ after seeing many white swans, for example.)

The problem with Hume’s critique is that the mind constructing patterns to model the outside world does not rule out patterns existing objectively in that world. And if there are external, objective patterns in the world, induction has some validity. Similarly, linking causes and effects as a habit of mind does not exclude there being actual causal links in the outside world. Hume’s approach fails because it does not weigh the probability of there being objective linkage against the probability of there being none. Only if the likelihood of there being no causal links is greater than that of such links existing does ‘cause and effect’ equate to magic, as Hume suggests. The natural philosophers after Hume simply ignored him (as they do most philosophers), and continued to postulate laws of nature without taking into account his arguments about the limitations of induction.

Immanuel Kant (1724-1804) had his own objection to the validity of apparent laws of nature. He pointed out that knowledge is dependent upon perception, so we can never know the world as it is in itself, independent of our particularly human manner of perceiving things. Kant states that we can only perceive appearances (known as phenomena), but these phenomena exist only in us, whilst the independent reality of things (which he calls noumena) can only be grasped by reason; and reason can say nothing about what things are like in themselves, independent of us. Kant’s denial of any possible knowledge of the connection between noumena and phenomena, that is, between reality and appearance, meant that his thinking remained within speculative epistemology, with little impact on the philosophy of science.

From Mendel to Modernity

The eighteenth and nineteenth centuries were the golden age for formulating laws of nature. For example, in chemistry, the gas laws of Boyle and Charles and the law of partial pressures were developed; in physics, the laws of thermodynamics and Ohm’s Law, among others; and in biology, Mendel’s laws of inheritance. The Church largely approved of all this law-making, as the prevailing view was that God’s laws governing nature were open to discovery by the scientific method, and that discovering them would reveal more of His character.

In fact, the error of believing absolutely in any given law of nature comes from overconfidence and from denying the ‘faith’ element in science. For instance, by 1880 many physicists believed that physics had been virtually sewn up, with just a few anomalies still to be resolved, such as black-body radiation. Yet within thirty years the laws of physics had to be completely rewritten because of these very anomalies.

As science moved away from certainty (a realistic direction to go in when induction is used), science and religion moved further apart. The terminology also changed. At the turn of the twentieth century, philosophers of science were talking about verifying theories, and distinguishing between strong and weak theories, meaning theories that were either directly verifiable or only indirectly verifiable.

In the middle of the twentieth century this method of determining the validity and strength of a theory by verification was challenged by Karl Popper [see this issue for his ‘Brief Life’]. Popper stated that theories cannot be proved by accumulating supporting examples or evidence. Rather, he claimed that theories can be disproved by finding contrary evidence. This would lead to scientists formulating, then testing, new theories. He termed this approach falsification.

Popper used the proposition ‘all swans are white’ as an example: however many white swans you find, this does not prove the proposition, since to prove it you would need to examine all the swans in the universe, which is impossible. (The lesser challenge would be to examine all the swans on this planet. Unfortunately, this is also impossible, as we could never know the colour of all the swans that have lived and died.) Instead of trying to do the impossible, we should devise tests to disprove our hypothesis – in this case meaning we should search for a swan which isn’t white.

If no imaginable measurements or observations could even in principle falsify a particular theory, then according to Popper, the theory loses its claim to be scientific. If it can’t be checked, it isn’t part of science. Examples Popper gives are Freudian psychoanalysis and Marxist historicism. These theories can adapt to any sort of evidence, and therefore can never be checked. By contrast, if attempts at falsification are made and a falsifiable theory holds up, the theory becomes stronger. This is analogous to Nietzsche’s saying, “What does not kill me makes me stronger.” For theories, this becomes “What does not disprove me makes me stronger.”

‘Normal Science’

Through verification and falsification, science strives for truth without ever reaching it, because it’s working within the limitations of the inductive method. Unlike logical deduction, induction cannot provide certainty, because there might always be some piece of falsifying evidence we haven’t discovered yet.

Another way of thinking about induction as a method of describing reality is by comparing a scientific theory to an algebraic or geometric pattern. The question becomes: how accurate is the scientific pattern at describing reality? For example, the elasticity of many metals shows a linear relationship between stress and strain for small weights (Hooke’s law); but this relationship does not hold for larger weights, when a stretching wire may deform and snap. Thus this linear ‘pattern’ has limits. Limitations also exist for Newton’s laws of motion. These Newtonian laws do not hold for very high speeds (approaching the speed of light) or very large masses (e.g. black holes). There, Newton’s patterns break down, and Einstein’s patterns take over. In fact, Newton’s patterns are a special case of Einstein’s patterns.
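As a sketch in modern textbook notation (which goes beyond the article’s own wording), Hooke’s law states that within the elastic limit stress is proportional to strain,

\[ \sigma = E \varepsilon \]

where \( E \) is Young’s modulus; beyond that limit the linear pattern fails and the wire deforms or snaps. Likewise, the relativistic momentum

\[ p = \frac{mv}{\sqrt{1 - v^2/c^2}} \]

reduces to the Newtonian \( p = mv \) when \( v \ll c \), which is one way of seeing Newton’s patterns as a special case of Einstein’s.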

This leads us to Thomas Kuhn’s approach to scientific progress. What Kuhn (1922-1996) calls doing ‘normal science’ is solving the puzzles produced by the prevailing scientific way of looking at the world, by applying the established patterns we know as scientific theories (up to the end of the nineteenth century they were termed ‘laws’).

Kuhn refers to the most fundamental patterns of scientific thinking, including its methods, as ‘paradigms’. However, he confuses the issue by using the term ‘paradigm’ in two different ways, as he admits in a postscript to his 1962 book The Structure of Scientific Revolutions. As well as using it to mean concrete puzzle-solutions, that is, basic scientific theories, such as the Darwinian paradigm or the Copernican paradigm, he also uses the term to cover the constellation of beliefs, values and techniques shared by members of a given scientific community.

If we stick to the idea of a paradigm as a fundamental ‘inductively-produced pattern’, there come points when the pattern fails: no solution can be found for a problematic scientific puzzle using the rules for that type of scientific thinking. These points of failure, where the theory cannot explain some observed phenomenon in its own terms, are referred to by Kuhn as anomalies. The accumulation of anomalies can result in the breakdown of ‘normal’ puzzle-solving scientific activity, as the established pattern fails to predict the obtained experimental results. Now the opportunity arises for alternative patterns of scientific thinking (ie, alternative paradigms) to compete with the existing theory. According to Kuhn, if a rival paradigm gains sufficient support within the relevant community, it can overturn the existing paradigm; in other words, become established as the new scientific way of looking at the world. One worry about Kuhn’s approach is that he claims that the success of a paradigm is related to the needs of the community rather than to any yardstick of objectivity.

How To Think With A BAT

So where does this leave scientific theory? We have seen that science depends upon there being perceivable patterns in nature. Induction can allow us to discern these patterns to a certain extent, but can never provide certainty. As science reveals more complicated patterns and develops more sophisticated models with better predictive power, it comes closer to describing the objective truth. As Kuhn argued, there is competition between paradigms, or basic inductive patterns. I think this competition should be judged using the yardstick of objective truth rather than the particular needs or whims of the community. Rather than claiming that any given theory demonstrates incontrovertible Laws of Nature, with the implied certainty, all we can say about competing theories is that one of them will currently be the Best Available Theory (BAT), being the one which comes closest to taking into account all the evidence available. Perhaps science will never reach demonstrated objective truth, and enquiry will only cease when the competition between theories stops due to the end of humanity.

© Russell Berg 2012

Russell Berg is currently working as a food microbiologist.
