
Letters
Sources of Disagreement • Žižek Re-Enunciated • Popper Pops Up • Attention Grabbing • The Beatles Can Get Back • Hope Spreads • Advances via Technology?
Sources of Disagreement
Dear Editor: I read with interest Sina Mirzaye Shirkoohi’s essay in Issue 169 on objectivity in science, basically arguing that it’s a chimera by citing Kuhn’s paradigmatic model of scientific revolutions. What’s often left out of this discussion is the role of philosophy in science, which scientists prefer to ignore and philosophers tend to exaggerate.
On YouTube you’ll find some of the brightest physicists and philosophers promoting their pet theories in contrast to what the status quo believes, unconstrained by the rigours of academically permitted paradigms. But their theories are often determined by philosophical considerations, such as: Do we live in a multiverse? Will AI become conscious? Is there quantum gravity? None of these questions can be answered by empirical means given our current limitations, which of course may change. I like the quote (source unknown), “Only future generations can tell us how ignorant the current generation is.” I also like a metaphor attributed to the scientist John Wheeler: “We live on an island of knowledge, surrounded by a sea of ignorance. As the island grows so does the shore of our ignorance.” Staying with that metaphor, I see the island as science, with philosophy out to sea; but the shoreline is where philosophy and science meet and contribute to each other. All scientific theories have limitations, no exceptions. This is why Wheeler’s metaphor is so apt. The sea of ignorance is infinite, and the perceived limit of what we can know is like a horizon that retreats as we approach it.
This all implies that knowledge grows over time. Taking the example of Galileo, as Shirkoohi does: with hindsight, and based on what we’ve learned using technology unavailable in Galileo’s time, we can say with the utmost confidence that his position was correct, and the Earth does go round the Sun. Given all the various philosophical positions among those toiling on the shoreline of our collective ignorance, and the fact that 95% of the universe is unknown to us, I think we’re ripe for another Kuhnian revolution in the field of physics.
Paul P. Mealing, Melbourne
Dear Editor: I was disappointed to see no reference to Noam Chomsky, the most cited living academic, in your ‘Sources of Knowledge’ issue (169). Chomsky is a firm ‘mysterian’ when it comes to scientific knowledge: there are problems we might realistically hope to solve, and mysteries that we will not. His What Kind of Creatures Are We? (2016) defends this position magisterially. Chomsky points out that we have both cognitive scope and cognitive limits. While supporting attempts to reach these limits, he believes that many problems (such as free will, the origin and use of language, and consciousness) will probably remain forever beyond our understanding. He also makes the point that science has given up trying to provide a complete understanding of the universe: it now simply constructs theories of certain physical phenomena – while living reasonably happily with the fact that the theories don’t always tie together (as with relativity and quantum mechanics).
Chomsky quotes David Hume’s observation that Newton “seemed to draw the veil from some of the mysteries of nature” but had shown “the imperfections of the mechanical philosophy”, and “thereby restored nature’s ultimate secrets to that obscurity, in which they ever did and ever will remain.” Despite the progress of scientific knowledge, Chomsky believes that this idea remains essentially true, quoting numerous scientists and philosophers in support. Chomsky has also drawn on the philosopher C.S. Peirce’s point that humans are evolutionarily endowed with the power of forming theories. But Peirce also stressed the boundary of ‘admissible hypotheses’ that was necessary for ‘imagining correct theories’. Our theory-making ability therefore comes with frustrating but necessary cognitive constraints. Chomsky thinks that we might not be able to frame some questions about the universe. And if presented with the answers by, say, a superior alien intelligence, we might not even understand them.
Dr Terence Denman, Totnes
Dear Editor: Referring to Michael McGranahan’s interesting article on knowledge (PN 169), I find it almost impossible to believe that Gödel, in his profound and remarkable work, could have found any source of inspiration in that vacuous old chestnut the liar paradox, often given in the form ‘this statement is false’. Let’s call this statement L. Michael implies that L is ‘unprovable’ – but surely this is only in a trivial way, in the sense that L does not provide us with anything to prove. By contrast, there may be hundreds of important Gödel-unprovable mathematical conjectures (eg, twin primes, Goldbach conjecture, Euler’s number…). Clearly it would be useful to know which ones, and why.
Consider the sentences:
(i) ‘87 is a prime number’; this statement is false.
(ii) This statement contains five words.
In both cases the phrase ‘this statement’ has a referent: it ‘points’ to something – to ‘87 is a prime number’ in (i), and to statement (ii) itself in (ii) – sentence (i) making an external reference, and sentence (ii) an internal one. Like (ii), the sentence L refers solely to itself; but unlike (ii), in doing so it points to nothing substantive. So how could it be true? How could it be false? Yet it solicits the knee-jerk logic that runs, ‘If L is true, L is false; and if L is false, L is true’. This does not, to me, indicate unprovability, but merely a misuse of the words ‘true’ and ‘false’ – for unprovability, if it is to signify anything, must concern something that exists, something that could be disproved.
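To put the contrast in mechanical terms – a sketch of my own devising, not anything Tissier or McGranahan offers – sentences (i) and (ii) each leave us an independent fact to check, while L leaves nothing:

```python
# (i) and (ii) each point at something checkable; L does not.

def check_i():
    # External reference: 'this statement is false' points at '87 is a prime number'.
    is_prime_87 = all(87 % d != 0 for d in range(2, 87))   # False: 87 = 3 x 29
    return not is_prime_87    # (i) is true, since the claim it points at is false

def check_ii():
    # Internal but substantive reference: the sentence reports its own word count.
    sentence = "This statement contains five words"
    return len(sentence.split()) == 5    # True

def check_L():
    # L points only at its own truth value: there is no independent fact
    # to consult, so evaluation never bottoms out (RecursionError in practice).
    return not check_L()

print(check_i(), check_ii())   # True True
```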
It’s difficult to see how the liar paradox could be regarded as anything but a linguistic frivolity.
Paul Tissier, Brighton College
Dear Editor: In an otherwise entertaining article in Issue 169, Michael McGranahan invites us to assume that the universe is “a consistent system, in the Gödel sense”, and hence that “the universe itself is an incomplete system.” But the universe is not a formal system [of language], nor anything remotely like a formal system. Accordingly, the question whether the universe is consistent or complete has literally no meaning in the Gödelian sense. Even ordinary mathematics is not a formal system – this being one of the things Gödel proved.
The celebrated Swedish logician and computer scientist Torkel Franzén once made a study of misapplications of Gödel’s work (Gödel’s Theorem: An Incomplete Guide to Its Use and Abuse), and more instances can be found in Alan Sokal and Jean Bricmont’s book Fashionable Nonsense. The key feature these misapplications have in common is that their subjects are not formal systems, since, as Franzén says, they do not have a formal language; a set of axioms in that language; a set of formal inference rules; or a set of theorems that can be shown to be computably enumerable.
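To see what those four ingredients look like in miniature, here is a small sketch of my own (using the well-known MIU toy system from Douglas Hofstadter’s Gödel, Escher, Bach, not one of Franzén’s examples): a formal language over the alphabet {M, I, U}, one axiom, four rewrite rules serving as inference rules, and a breadth-first search that computably enumerates the theorems.

```python
# The MIU system: strings over {'M','I','U'}; axiom 'MI'; four rewrite rules.
from collections import deque

AXIOM = "MI"

def successors(s):
    if s.endswith("I"):                      # Rule 1: xI  -> xIU
        yield s + "U"
    if s.startswith("M"):                    # Rule 2: Mx  -> Mxx
        yield s + s[1:]
    for i in range(len(s) - 2):              # Rule 3: III -> U
        if s[i:i+3] == "III":
            yield s[:i] + "U" + s[i+3:]
    for i in range(len(s) - 1):              # Rule 4: UU  -> (deleted)
        if s[i:i+2] == "UU":
            yield s[:i] + s[i+2:]

def theorems(limit=10):
    # Breadth-first search from the axiom: the theorems are computably enumerable.
    seen, queue, found = {AXIOM}, deque([AXIOM]), []
    while queue and len(found) < limit:
        s = queue.popleft()
        found.append(s)
        for t in successors(s):
            if t not in seen:
                seen.add(t)
                queue.append(t)
    return found

print(theorems())  # ['MI', 'MIU', 'MII', 'MIUIU', 'MIIU', ...]
```

It is this whole package – language, axiom, rules, enumeration – that the misapplied targets lack.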
Now the universe may be consistent in some wider sense: for example, that its governing laws respect the Principle of Non-Contradiction. As to whether the universe is incomplete more broadly construed, insofar as human knowledge is concerned, the answer is a resounding ‘yes’. But outside the context of formal systems, incompleteness does not follow from consistency, since the words then do not have their Gödelian meanings, and so Gödel’s theorems do not apply.
There’s no law against invoking Gödel metaphorically or as a rhetorical flourish; but this may leave the reader with the impression that something has been proved when it definitely has not. This is ironic considering that Gödel’s theorems directly concern the question of what can and cannot be proved.
Tim Wilkinson, Houghton-le-Spring
Žižek Re-Enunciated
Dear Editor: In Issue 168, the reference to the enunciated and enunciation by Slavoj Žižek in his article ‘Welcome to the Civilization of the Liar’s Paradox’ is reminiscent of Jacques Derrida’s signified and signifier. As soon as I read Žižek’s distinction between enunciated and enunciation, it made me recall what I had learned in an undergraduate critical theory course that was heavy on Derrida’s theories. If, as Žižek says, the enunciated is the content of what is being said, and enunciation is what is “implied by what you are saying”, can it be true that something called ‘stop’ can be either a red octagon or a signal whose light is red? According to Derrida’s signified and signifier, meaning is never stable, while Žižek seems to say something can be both true and untrue. Schrödinger’s Cat, anyone?
Paulette Halili, Irvine, CA
Popper Pops Up
Dear Editor: As Brian King lucidly explained in Issue 169, Sir Karl Popper provided brilliant insight into the logic of scientific discovery through his idea of falsification being the basis of science. But Popper’s attempts to describe a methodology for putting that idea into practice were only a qualified success. Not a failure, as his account often works well in applied science – it proved very helpful to my own research in machine dynamics, for example – but it’s oversimplified for more fundamental questions about the history of science.

However, his admiring critic Imre Lakatos developed an account of ‘Scientific Research Programmes’ (SRPs) that combines the essence of Popper’s logic with a much better explanation of the historical record, and, unlike the approach of Lakatos’s contemporary Thomas Kuhn, is prescriptive as well as descriptive. Whereas Popper only granted unfalsifiable statements a role in scientific practice as a source of hypotheses and as a tool for methodology, on Lakatos’s account an SRP combines a ‘core’ of ideas with a ‘protective band’ of auxiliary hypotheses connecting the core to the world of experimental results. The protective band is incrementally refined and added to over time in response to unexpected results, so the theory as a whole at any given moment is falsifiable, as Popper demanded; but the normal response to a failed prediction is not the instant abandonment of the core idea, but the evolution of the auxiliary links. Eventually the entire SRP may become ‘degenerate’ and be superseded by a more ‘progressive’ rival SRP, though there may be a long period of overlap, and the criteria for ‘changing horses’ are not clear cut. (Any mechanical engineer will tell you that despite the success of relativity and quantum theory, Newton’s SRP is alive and well, albeit hedged around with auxiliary hypotheses regarding its field of application.)
Boyle’s Law provides a simple illustration. I learnt at school that it only applied to ‘perfect gases’ – which were defined as those that obeyed Boyle’s Law… clearly not a falsifiable statement. However, combine it with a bunch of empirical statements of the form ‘gas X is a perfect gas to accuracy Y under conditions Z’, and you have a useful model for capturing real-world behaviour (even though it has been superseded by the Kinetic Theory of Gases as an SRP).
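To make that combination concrete, here is a minimal sketch (my own, with invented numbers rather than anything from the letter) of how the compound statement ‘gas X is a perfect gas to accuracy Y under conditions Z’ becomes a falsifiable check:

```python
# A toy check of the combined claim (illustrative numbers, not real data):
# 'gas X obeys Boyle's Law (P x V constant at fixed temperature) to accuracy Y'.

def obeys_boyles_law(readings, tolerance):
    """readings: (pressure, volume) pairs taken at one fixed temperature.
    True if every P*V product stays within `tolerance` (fractional) of their mean."""
    products = [p * v for p, v in readings]
    mean = sum(products) / len(products)
    return all(abs(pv - mean) / mean <= tolerance for pv in products)

# Hypothetical readings for a near-ideal gas at room temperature:
readings = [(100.0, 1.000), (200.0, 0.500), (400.0, 0.251)]
print(obeys_boyles_law(readings, tolerance=0.05))   # True: the law holds to 5% here
# A failed check falsifies the combined statement for that gas and regime,
# while the core definition of a 'perfect gas' remains untouched.
```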
It’s a shame that Kuhn’s theory of Scientific Revolutions, with its sensationalist scepticism, is the response to Popper that gets all the publicity. Whether due to Lakatos’s youthful Marxism or his tragically early death before he could properly pull his ideas together, his contribution has never gained the attention it deserves.
Roger S. Haines, Ealing
Dear Editor: Reading my Issue 169, I encountered an interesting article by Brian King entitled ‘Popper, Science & Democracy’. While the article talks about how a theory is determined to be true, the topic took me to another place. I have been reading about mankind’s development. We find that historically, man has separated himself from all other beings: we had a conversation with ourselves and determined that we are the be-all and end-all of creation. Is this supposition correct? Or, if it’s treated like a theory, are there alternative theories?
In fact, scientific investigation has now dismantled the ideas of what supposedly makes man different from other beings. We find that tool-making, burying the dead, the creation of meaningful symbols, and a belief in an afterlife may all be characteristic of non-human beings too. Given this information, we should dispense with the notion that man is the sole beneficiary of God’s grace. But now that we have this new information, will the old idea be pushed aside? No: emotional attachment outweighs facts. So theory falsification comes hard in life, and paradigm shifts come even harder.
Clarence G. Underwood, Esparto, California
Attention Grabbing
Dear Editor: Paul Doolan’s article on attention in Issue 169 caught my attention – partly because of its wide-ranging informativeness; partly because in my Master’s thesis, attention (or attending) was a key idea; and partly because, despite my enjoyment of the article, I felt there was an unnecessary gloom about it.
The gloom derives from Doolan’s use of Herbert Simon’s assertion that “in an information rich world, the wealth of information means a dearth of … the attention of its recipients.” My feeling is that this is typical of the mindset of economics, in which everything is portrayed as a zero-sum game. Yet attending should not be thought of as an isolated activity; rather, it is but one part of an interdependent, self-referential series of activities which Benjamin Bloom and others identify as “receiving (sometimes called ‘attending’), responding, valuing, organising and characterising”, and which, in their Taxonomy of Educational Objectives in the Affective Domain, they present as a ‘scaffold’. I think it’s a more dynamic activity, in which our minds repeatedly cycle through those stages, creating provisional hypotheses that change how we attend and, in turn, change the other stages, until we settle upon some conceptualisation that we can, for the moment, accept.
Our richness of information enables a wider, more nuanced appreciation. So rather than the wealth of information reducing our attention, it is stimulating more selective attention. We’re living in an age of greater creativity than any before it.
Alasdair Macdonald, Glasgow
The Beatles Can Get Back
Dear Editor: I was disappointed, but unfortunately not surprised, when I took a first glance at my latest issue of Philosophy Now (169). In keeping with recent trends, we’re encouraged to consider the profound philosophical contribution of The Beatles. This follows articles on other philosophical giants like Taylor Swift and David Bowie. Does Philosophy Now wish to be considered a serious publication? Or is it more important – necessary even – to seek wide appeal? Will there be future articles on Ken Dodd’s hit ‘Happiness’ as both a profound statement on Bentham’s Utilitarianism and a rejection of Schopenhauer’s pessimism? How much further is this going to take us?
Andrew Lewis, Caerphilly Borough
Hope Spreads
Dear Editor: I was delighted by Elise Beal’s article in PN 168 on the similarities between the Japanese philosophy ikigai and the online trend #hopecore. As a school student, I sometimes feel as if I’m watching myself and my peers slowly decaying; so it was reassuring to learn that someone else believes that, through the smog of social media, our generation can find meaning through small snippets of happiness.
Small pleasures can bring hope for short periods of time, which is good; but I believe we’d all benefit if we were torn away from our screens by bigger pleasures. By this I mean projects of our own choosing – say, writing a story or painting a landscape. I imagine such endeavours are what Simone de Beauvoir meant by the things that bring us ‘transcendence’. Once one has cleared the hurdle of ‘immanence’ (defined by de Beauvoir as the comfortable loop of same-olds – mind-numbing videos, for instance) and painted a decent picture, or written a good story, one feels satisfied, and will consider doing it again instead of switching the glowing screen back on. So my argument is that ikigai and hopecore are key to the long-term wellbeing of our generation, as Beal so brilliantly stated – but they’re only a gateway to bigger pleasures, satisfying projects, and a lifetime of meaningfulness.
Ramona Middleham, Devon
Dear Editor: Recently I finished re-reading the novel The Poisonwood Bible by Barbara Kingsolver while enjoying a Cohiba in my favorite cigar lounge. This profound tale reminded me of the suffering of people as a result of war, famine, and disease in equatorial Africa and elsewhere. With half a cigar left, I turned to continue my reading with the short essay by Elise Mayumi Beal in Philosophy Now 168, ‘Young & Meaningful’. I was heartened by the work of this young writer, and reminded of a discovery I made long ago as a young adult: the writings of the psychologist Viktor Frankl, a Jew who survived the concentration camps and who discovered the immense healing power of meaning. His insights helped develop a new paradigm, the ‘Will to Meaning’.
Thank you for publishing Beal’s essay and for reinforcing my hope for the world.
Tim Strutz, Harrison Township, Michigan
Advances via Technology?
Dear Editor: I do not agree with the author of ‘Studying Smarter with AI?’ in Issue 168. I would have died for AI during my Civil Engineering studies in the old millennium, given how I use it now – for things like reading this very article. I used AI to translate it into my mother tongue, and read it with much more clarity. And then I questioned AI about it. Had a little conversation.
Taking notes in math lectures in the 1980s took all my energy just to get everything down on paper. I had absolutely no time to reflect. And the information was delivered in a ‘one size fits all’ style. If ChatGPT, Copilot or whatever had been around, I would have asked questions and got clarification, in the way I now use AI after reading philosophical texts, to get the best possible understanding of a new concept. No two students of anything understand new concepts in exactly the same way.
AI should definitely be a part of every university student’s toolbox – not to get nice little answers, but to understand stuff in a way that suits their specific brains. I really hope contemporary students use AI to that end.
Anders Wallin, Uppsala, Sweden