The Tears of Many Clowns • Original Thinking • Liberty, Equality, Technology • Can Robots Wrestle with Ethics? • Uncommon Sensations • Absurd Speculations? • Who Am I? • Atheism On Its Own Terms
The Tears of Many Clowns
Dear Editor: In his article in Issue 111, Mordechai Gordon, paraphrasing Freud, tells us that “people who suffer from mental illnesses such as depression and paranoia typically have a very stern super-ego and are, therefore, not able to recognize and appreciate humor.” Yet some of the greatest comedians – Tony Hancock, Kenneth Williams, Spike Milligan, Frankie Howerd and many others – suffered from depression. How does he explain that?
Ian Birchall, London

Original Thinking
Dear Editor: As a female philosopher I would like to comment on Peter Adamson’s article in Issue 111 regarding the distinct scarcity of us within the philosophical arena. We are here, and we do have something to say – the issue appears to be a lack of interest in the topics we discuss rather than a lack of discussion itself.

Raising humanity, which globally and historically has been the role of women, seems to be of little interest to male philosophers past and present – as if one could somehow discuss the human condition and the ideal society without taking into account the familial bonds which are the foundations of society. People are not born in a vacuum; they are born into a family, and the future of the individual depends to a large extent on the treatment (and conditioning) they receive within that particular group. Having read many books by different philosophers, I am always struck by how little childhood is mentioned – not only childhood itself, but the complex emotional relationships that exist within the family unit. Some philosophers may claim that this is the business of psychologists, but I disagree. If you want to discuss the meaning of life, then surely it is relevant to discuss the circumstances into which that life is brought. Obviously there are topics where this discussion would not be relevant (e.g. the nature of truth), but this does not undermine my argument that the scarcity of women philosophers throughout history is due to men finding women’s opinions of little interest. Someone considering an opinion irrelevant does not make it irrelevant per se. It does, however, make it very hard to get published.
This isn’t a rant against men. This is an observation that many men don’t find the female perspective interesting enough to be considered. Curiously, many women don’t find the male perspective interesting either – which could explain why there are so few female philosophers.
Liberty, Equality, Technology
Dear Editor: For me it was timely that PN’s issue on Liberty and Equality (Issue 110) came out when it did, because I was then travelling in France, where many of the battles to shape such concepts occurred. I felt lucky travelling in France because I had the resources to do so. What has this to do with liberty and equality? Well, if everyone were equal I don’t think we would have the liberty to travel as we do. Historically, states that focused on economic equality at the expense of liberty became authoritarian, thus restricting travel to a select few for fear of losing control. But as freedom has expanded around the world, equality has also grown. It seldom happens the other way around, if ever.
Whilst travelling in France I had a liberty that is often overlooked – the independence that technology provides. I had the liberty of taking photographs and seeing them instantly without having a professional develop them. I also had the liberty to extract information instantly from the internet without relying on anybody else. These liberties are available to almost everybody, thereby enhancing the equality amongst us. I don’t think those technological freedoms would have developed in societies where equality was the chief pursuit. In fact, these technological freedoms emerged in countries that cultivated liberty first – the liberty to experiment and exchange ideas freely.
David Airth, Toronto
Dear Editor: I enjoyed Sean Moran’s article ‘Surveillance Ethics’ in PN 110, especially as I was sitting in a train concourse whilst reading the article and counted six visible CCTV cameras. However, it struck me that CCTV cameras are not the greatest concern in a ‘surveillance society’. Indeed, anyone who has worked for a large government organisation knows that contracts almost always go to the lowest bidder, and thus it is often a challenge for government to ensure that its emails are working and ID cards are handed out, let alone to monitor intelligently countless hours of CCTV footage. Of course, such footage is available within a time limit should anyone ever want to review it, but it often takes quite a remarkable event for this to happen. The reality is that although we are much recorded, we are seldom watched. So my main fear in a surveillance society is of my fellow citizens equipped with smartphone cameras and social media accounts. There are famous examples of a moment of indiscretion caught by a passer-by and uploaded to the internet that have destroyed careers and even lives. That such misfortune happens to celebrities is perhaps not too worrying, since celebrities make their money by living in the public eye; but the fact that a minor misdemeanour by a naïve teenager, or perhaps drunken stupidity from a student, can haunt them for many years to come is a significant worry. There is a large online community that takes pleasure in passing these clips around, passing judgment in a frankly evil way, and hiding behind a degree of anonymity to avoid the usual social consequences of persecuting others. No laws can prevent this happening once the clip has been uploaded and multiple copies distributed internationally. Some might say this serves the person right for misbehaving in public, and for adults they may have a point; but for children or teenagers?
Forget worrying about Big Brother, it’s our other ‘siblings’ who do the real harm!
Simon Kolstoe, Botley
Dear Editor: Sean Moran’s article on ‘Surveillance Ethics’ addresses an issue that is exhaustively (to my mind) reported on, both pro and con. But surveillance is very popular when, for example, a child kidnapping occurs and the discovery of the child’s whereabouts is aided by surveillance cameras. Police body-cams are also becoming popular, and are urged by both police and civilians. All sorts of truth come to light, protecting both sides of the story. Wrong-doers being identified and apprehended is a good consequence. So until surveillance is in my house uninvited, I am all for it. The sophistication of data retrieval is something to ponder, and maybe to be concerned about; but with luck the new technology for our protection will mean that we, the watched, become smarter and quicker at watching the watchers.
Cheryl Anderson, Kenilworth, IL
Dear Editor: Francisco Mejia Uribe’s article ‘The Paradox of Liberalism’ in Issue 110 concerns me in a number of ways. Firstly, he seeks to address the issue of fundamentalism without defining the term. Let me try to do so. A characteristic I observe in fundamentalists is taking as knowledge what others would take as only a belief. Hence, a religious fundamentalist knows that his or her God exists, whilst someone sharing the same religion in a non-fundamental manner only believes God to exist. The believer acknowledges some measure of doubt, whereas the fundamentalist admits to no doubt, and may even assert that they can prove their stance. This makes dialogue between a fundamentalist and a non-fundamentalist impossible, for the former will see the latter as being wrong, or at best, ill-informed. Hence Uribe’s hope that liberals can win the argument with fundamentalists is ill-founded: the fundamentalist will not accept that there are grounds for discussion. The only hope lies in trying to persuade those exposed to fundamentalist arguments but who have not yet adopted a fundamentalist position.
Uribe refers to John Stuart Mill’s dictum that we should be free to pursue our own agendas as long as our actions do not restrict the freedom of others. However, no society has ever successfully tackled the second part of this proposition other than by restricting the freedoms of those who transgress it. When fundamentalists restrict others’ freedoms, they claim they have the right to do so, since in their eyes their standpoint is completely justified. They will however become aggrieved when their own freedoms are restricted by non-fundamentalists.
My second concern is that Uribe repeatedly refers to ‘Western’ liberalism. I know followers of Buddhism and Islam – both often described as ‘non-Western’ religions – who are just as liberal and tolerant of the views of others as any Westerner. Claiming liberalism for the West is philosophical imperialism – an activity which can only increase hostility and strengthen the stance of the non-Western fundamentalist.
Michael Shaw, Huddersfield
Can Robots Wrestle with Ethics?
Dear Editor: As an old Expert System programmer, I read with great interest Robert Newman’s article entitled ‘Can A Robot Be Ethical?’ in Issue 110. His article raises some important issues, but he might wish to consider two additional arguments. Firstly, there is the possibility of programming computers to consider ethical decisions, as long as, like humans, they judge within a given ‘sphere of responsibility’. We do not expect a police constable to make judgments on launching missiles. Are we expecting computers to be omniscient, whereas humans are clearly not? Secondly, there is the issue of equity. Magna Carta gives expression to moral responsibility by guaranteeing judgment by one’s ‘peers’. Are the autonomous robotic weapons described by Mr Newman’s article our ‘peers’?
Andrew J. Lewis MSc FLS, Chelmsford, Essex

Uncommon Sensations
Dear Editor: In her excellent article in Issue 110 on Thomas Reid and the fading of the Scottish Enlightenment, Toni Vogel Carey pays tribute to Reid’s defence of common sense as a response to Humean scepticism. Common sense beliefs about the existence of physical objects and other minds, for example, have a robust credibility that sceptical sophistries cannot match. However, on the topic of perception, which is often the starting point for sceptical philosophy, it may be worth elaborating Reid’s thinking in more detail, to show the effectiveness of his arguments.
Scepticism about what we can conclude about the world as a result of our perception of it (as found in Descartes, Hume, Berkeley and many philosophers before and since) is generally based on the argument from illusion. A stick in a jar of water looks bent, but is in fact straight; a cold hand finds warm water hot, whilst a hot hand finds the same water cool; an apparent lurker in the bushes can turn out to be mere shadows. Examples such as these are used by sceptics to show that our senses are unreliable and therefore that we do not know that reality is as we perceive it to be.
Reid rebutted this argument very effectively. He asked, how do we know that our senses have failed us on any of the occasions cited? By further use of our senses, of course! The stick in the jar looks bent, but further use of our senses will show that it is in fact straight, and likewise with the other examples. We only know that our senses fail us at times because most of the time they do not. So mistaken perceptions do not provide a sound basis for a sceptical philosophy. (Two centuries after Reid, Gilbert Ryle expressed the same argument in terms of currency and counterfeits. You may encounter a counterfeit note, but it would be foolish to then wonder if the whole currency could be counterfeit. You cannot have a counterfeit without some genuine currency to contrast it with. Likewise, you cannot have sensory mistakes without genuine sensory experience to contrast them with.)
Unfortunately, however, Reid also thought that common sense validated his conventional religiosity. Reid accepted the argument from design, and believed that the existence of God was evident from all the examples of apparent design we see in nature. The existence of such things as parasites, predators and suffering was treated as a complicating factor, not as evidence against a benevolent deity. The theory of evolution put paid to the argument from design. Unfortunately for Reid, he’d pinned much of his philosophy on design, so its collapse seriously damaged his standing among philosophers.
Hume has fared much better. His philosophy is held in high regard and even his most sceptical arguments are still found to be challenging, if not compelling. The contrast with Reid is not without its ironies, however. Hume often appeals to our common sense when trying to persuade. In his analysis of miracles, for example, a key point is that everyday experience tells us that people are prone to lying, exaggeration, errors and wishful thinking; so reports of miraculous events should always be treated with scepticism.
Perhaps if Hume had listened to Reid on perception, and Reid had listened to Hume on religion, the Scottish Enlightenment would not have faded as it did. Still, common sense tells us that Scotland is as good a place as any to have an Enlightenment, and that a second flowering of it is not out of the question.
Les Reid, Edinburgh

Absurd Speculations?
Dear Editor: I enjoyed reading Fred Leavitt’s ‘Dancing with Absurdity’ in Issue 110. However, Leavitt failed to mention that we live in a mainly rationally-connected world, which gives our lives a sense of continuity. I am sure that even Leavitt lives his daily life in full expectation of this rational continuity. For example, when he parks his car and later returns to his vehicle, he expects to see the vehicle, not to find that it has been changed into a pumpkin. How else could Leavitt or anyone else plan their lives, if it were not for rational continuity? The reason that Leavitt can plan his daily life is that, contrary to what he says, he does assess probabilities without certainty. When he returns to his car it is most likely – a greater probability than any other outcome – that it will be intact. It is also far more likely that his vehicle would have been vandalised than that it had shrunk and been placed in a matchbox.
Induction [generating scientific laws from observation] follows similar arguments, in that it claims that the future is more likely to follow the past unless there is reason to the contrary. Science is also underpinned by other principles, including Ockham’s Razor. Using these methods we can gain useful provisional insights into the world we inhabit, whilst accepting that certainty is beyond our grasp.
Russell Berg, Manchester
Dear Editor: Fred Leavitt invites us to spot the error in his stimulating article, in which he sets out a case for what I might call ‘super-radical scepticism’. Nothing is certain and all possibilities are equally likely, he concludes. I would agree that we can be 100% certain about very little. But Fred goes too far with the second bit. There are more probable explanations of animal droppings in the park than Fred’s unicorns, whilst no evidence will prove the non-existence of unicorns. Where I believe he trips up is in saying that without some certainty to rest on, probability cannot be meaningfully assessed. Quite the opposite: probability only has meaning in a world of uncertainty. If I run for a bus, I improve the probability that I will catch it, but I certainly can’t be certain that I will!
All possibilities are equally valid only in the sense that each fails to deliver certainty. But once we forego certainty we can weigh up competing theories – and have some great discussions about the best criteria to apply to this task.
Jon Cape, Stirling, Scotland
Dear Editor: I enjoyed Fred Leavitt’s article, but I believe there are two ways out of the impasse of radical skepticism. One was expressed by W.V.O. Quine in his Two Dogmas of Empiricism (1951) and the other by Ludwig Wittgenstein in On Certainty (1969).
Leavitt is sceptical about empirical approaches to what is knowable. On an individual basis one can agree. But Quine demonstrates that knowledge is a collective endeavour: not so much an individual epiphany but a cumulative, shared achievement over time. I do not decide personally that hedgehogs are prickly, but rely on collective knowledge to warn me off them – along with porcupines, gorse and cacti, which I know in advance will cause an unpleasant sensation if I touch them. Over time a body of knowledge accretes. It’s most secure at its centre, but less so at the edges. At the edge of this ‘fabric of knowledge’ is experience. Experience – empirical discovery – is the agent of new insights, and as the insights creep in from the edges, the entire fabric settles into a new configuration. This point of view (which, to be fair, Leavitt acknowledges) takes away the burden of establishing all knowledge for oneself and puts what’s knowable in terms of consensus. Another aspect of this thinking is to say that knowledge consists of ‘models’ of the world, and as with other types of model, they come and go. This takes the pressure off words like ‘truth’ and ‘knowledge’, which have become distressed by centuries of over-use.
Wittgenstein’s approach is to recognise the other face of truth, namely doubt. We deal with doubt every day. So what, we might ask, does doubt do for us? We can only have doubt where it is reasonable to suppose that somewhere else there is a relative certainty against which the doubt is counterposed: Wittgenstein writes, “A doubt which doubts everything is not a doubt.” This opens up an entirely novel arena: what is it that we take as true that enables us to have legitimate doubt? Without that truth, our doubt would simply be a metaphysical stance, not an investigative tool. Wittgenstein warns us in On Certainty against the search for ‘transcendental certainty’. It was his colleague, G.E. Moore, who worried about how he could ‘know’ that his hand was real, or how he could ‘know’ that he was looking at a tree. Wittgenstein solves that predicament by suggesting that we do take many things as being true, often without realizing it – expressed in his view that nobody bothers to check that his feet are on the end of his legs when he stands up from a sofa.
John Comer, Birmingham
Dear Editor: I think that most philosophers will find Fred Leavitt’s essay on radical skepticism unsatisfactory. He cites G.E. Moore’s argument that if a seemingly sound argument leads to an implausible conclusion, the argument may not be sound after all. This is a somewhat weakened version of the argument known to philosophers as reductio ad absurdum: if an argument leads to an absurd or unreasonable conclusion, then the argument is fallacious. This is true even if the cause of the failure is not immediately apparent. Thus a rational argument that undermines the idea of rational thought is a self-contradiction.
My sense is that Dr Leavitt finds the idea of radical skepticism to be self-evident. If that is the case, then traditional philosophy will not be of much use to him. He might want to consider the world of art as a better medium for his self-expression and self-understanding. I suppose that the best artists fully engage their minds with their work, but this engagement is somewhat different from that of a scientist, historian or philosopher. It relies more on a kind of immediate intuition and less on discursive reasoning. In the early part of the Twentieth Century, there was an art movement known as ‘Dadaism’, which seemed to hold convictions similar to those of Dr Leavitt. Marcel Duchamp was a member of this movement. Dadaism can be partly understood as a response to the horrors of the First World War. When we find that our practical lives are becoming increasingly irrational, then the idea of a reasonable life starts to seem like a cruel hoax. Dadaism might be interpreted as an attempt to make an affirmation of life after intellectual death. But such an afterlife might be rather short. The life of the mind depends on finding some kind of balance between rational faith and rational doubt. Radical skepticism is too unbalanced, and so it cannot carry us very far. But the related idea of radical questioning might be more productive.
D.N. Dimmitt, Lawrence, Kansas
Dear Editor: Whenever someone makes some sort of philosophical claim, the burden of proof rests on that person to back up that claim. However, in asserting that radical skepticism is the correct epistemological position, Fred Leavitt spectacularly fails to do so. After all, if it is truly impossible for us to know the truth of anything, then how can he claim – without contradicting himself – that radical skepticism is true? Moreover, he opines that reasoning is full of pitfalls such as infinite regresses and uncertain assumptions; but isn’t reasoning what he is utilizing throughout his article to prove his case? To use – as he calls it – a discredited epistemological tool himself in order to prove his position is therefore the height of hypocrisy and wholly disingenuous. The death knell of his position comes at the end of his article, when he states “Possibility 2. Radical skepticism is correct. We cannot know anything, apart from the fact that radical skepticism is correct.” Critically, he doesn’t prove that; nor can he. Yet if one is going to go out on a limb and claim that one’s theory is right, then it must be able to be tested and verified. In short, one must be able to prove her stance. Dr Leavitt’s theory therefore fails because it cannot be proven to be right. On the contrary, it rests entirely on an appeal to ignorance.
Tracey Braverman, Brooklyn, New York
Who Am I?
Dear Editor: While it would be wrong to dismiss the role of memory in identity as described in Sally Latham’s ‘Shaping the Self’ (Issue 110), I would also point to the role played by the fact that we are always at a particular point in (subjective) space and time, with an experience of continuity. Consider some thought experiments derived from the movie The 6th Day, starring Arnold Schwarzenegger. In it, a corporation has developed the technology to clone individuals and implant memories (back) into them. Henchmen are killed; then, thanks to capital and technology, they are basically resurrected.
Would this necessarily constitute the resurrection of the individual that died? The problem for me is that being killed and brought back as a perfect replica of myself would still involve a major disruption in the continuity of my particular presence in space and time. My replica may be just like me and have my memories. But still, would it be the ‘me’ that had died? A rash materialist might boast, “But of course! Same body; same brain; same you!” – but we might wonder, if the technology did exist, would they be willing to put their money where their mouth is? Or, being civilized people, we might settle for a less drastic scenario: one in which the replica was created while the original is still alive. Same mental makeup and memories; but in this case we could confidently say the original identity is not continued through the replica, since identity is anchored in the original body.
Again, I am not dismissing Latham’s point. Rather I am sharing her uncertainty, and drawing out what seemed implicit throughout her explorations – that what we are dealing with is not a single criterion, but a feedback loop between mental states, memories of those states as recorded in any kind of journal, and the perceiving thing – that particular and continuing point in space and time that we, as conscious beings, always are.
D. Tarkington, Nebraska
Dear Editor: In Issue 110 I found Sally Latham on memory and identity very interesting. Perhaps another view of identity, starting from first principles, might be worth considering. In the first place, this is what ‘I’ know about ‘I’: ‘I’ can see the body I inhabit in the mirror, but not the ‘I’. I am directly aware of ‘me’ (I assume you are aware of ‘you’). But without a body to inhabit, ‘I’ would be meaningless. I am the thing that drives this body. I am the thing that is conscious of the world, constructing in my mind a picture of the world which contains everything I know. Some things I can picture, but others have no pictorial form. I cannot picture ‘I’ or my consciousness in any form. There are many things in our picture which have no pictorial form or any position in space. The ‘I’ is one of these – it is the necessary subject of consciousness. To look for it anywhere is a category mistake.
Roy Anderson, Yorkshire
Atheism On Its Own Terms
Dear Editor: At the end of his letter in Issue 111, Dr Stephen Anderson expresses the hope “to see someone attempt a rational defence of Atheism on its own terms.” So here goes! Atheism does not deny the existence of one or more gods; rather, it denies the conceptual coherence of any god hypothesis yet offered by any religion. The question of such hypothesised gods’ existence then simply does not arise. Atheism can thus be seen as the application of a general approach to knowledge, which requires any hypothesis worthy of consideration to be conceptually coherent and potentially testable, and which denies that belief, however strong, is evidence for anything. This is a thoroughly rational position to maintain.
Roger Jennings, London
Dear Editor: In my language game, and apparently in the language games played by those of your readers whose letters appeared in PN 110, atheism is used to mean no more than the idea of seeing nothing persuasive in theists’ arguments for the existence of God. The term entails no claim to prove God’s nonexistence. There’s nothing irrational in that. In the language game as played this way, what would be irrational, absurd even, would be to discuss atheism without reference to theism. Christians I have known have also readily conceded that they cannot prove the existence of God. For them (and perhaps Dr Anderson shares this view), there would be no reason to abandon their faith unless their God’s existence were disproved. There’s nothing irrational in that position, either.
Dave Mangnall, Cheshire