Often Reasonable • Genetic Variation • Remembering Wonderland • Multiple Multiverse Problems • Social Rights • Imagining Kafkaland • Midgley’s Far Reach
Often Reasonable
Dear Editor: Eugene Earnshaw in Issue 119 provides a lively discussion of induction. Precise definition is essential to both deduction and induction. Consider deduction: all who commit murder should be hanged; he committed murder; he should be hanged. Unless we define murder precisely, the conclusion might be that someone who unintentionally killed by misadventure should be hanged – perhaps not what even the most ardent supporter of capital punishment would want. Or take the inductive argument that all swans are white; therefore the swans on your lake will be white. But your lake is in Australia, where they are black. We would have avoided the error with a more precise definition of ‘all swans’: it should be that ‘all swans so far seen in the Northern hemisphere’ have been white.
When we define the premises of a deduction precisely it appears that some deductions are really inductive arguments. All humans die; Socrates is human; so Socrates will die. But we can only say that up till now all humans have died. So it assumes that the future will be like the past when we say Socrates will die.
Once the past has been adequately defined we must assess the probability that the future will be like it. At least three considerations affect that. First, how extensive is our experience of the past situation in question? Billions of humans have died, and only millions of white swans have been seen. So it is safer to bet that Socrates will die than that we will never see coloured swans. Second, does past experience illustrate a universal rule? The set of humans belongs to the larger set of multicellular animals, which have all died so far. The set of swans belongs to the larger set of birds. They are not all white. That is another reason why it is safer to predict death than the whiteness of future swans. Third, we must determine the reason why the past has been what it is, and consider why that reason might cease to apply. The reasons swans are white and humans die are both biological, mainly genetic. Scientists are busily working to prolong human life. Sadly there is no interest in breeding red swans. So from that point of view it is safer to bet on the continued whiteness of Northern hemisphere swans than on certain death.
Those three reasons make it a safe bet that a dropped wineglass will fall to the floor. But precise definition is still necessary. It will fall provided that the intervening medium is no denser than the glass, and no one catches it.
Allen Shaw, Leeds
Dear Editor: Dr Earnshaw has many witty and suggestive things to say in Issue 119, not just about Hume’s problem, but about philosophy’s problem-solving problem. However, his squaring-the-circle argument is neither deductive nor persuasive, though it does capture enough of our modern notion of probability to appear reasonable.
His main argument’s two premises, that an individual will be randomly selected from the population, and that most members of the population blow the bugle beautifully, presume a future event and a current state of affairs respectively. The very form of the argument begs the question of a possible logical connection between past events and future occurrences. Further, each premise individually takes continuity over time for granted: the second premise does so by presupposing an ongoing stable population with a fixed characteristic; the first stipulates as certain an even less knowable future state. Thus there is little to the argument that is deductive besides its ostensible structure, and a whole lot of implicit inductive reasoning going on.
The key question in any case is: Does induction require a logical justification? Hume’s profound insight that induction is a settled habit of mind suggests not. Thinkers as diverse as Thomas Reid and Ludwig Wittgenstein have argued that belief in natural continuity is part of our fundamental way of seeing the world. Since rational argumentation takes this conceptual framework for granted, it doesn’t itself require any form of rational justification. Induction’s deep-seated ubiquity helps explain why Dr Earnshaw can’t avoid using the ‘unjustified’ continuity of nature as his argument’s primary building block.
For those still insisting on the need to rationally justify induction, the following might be as good as it gets:
P1: We think the future will be like the past (even if we sometimes get it wrong).
P2: We have no reason to think the future will not be like the past (we have no reason to think that we will always get things wrong).
C: It’s therefore rational to go on thinking the future will be like the past.
At least, this seems logical to me…
Robert Sawyer, Long Beach, California
Dear Editor: In his Brief Life of David Hume in Issue 119, Sir Alistair MacFarlane says that causal reasoning is older than the use of formal logic, since even stone-age man would reason that one stone is able to splinter another stone. However, surely that man could not reason, or even think at all, without the use of deductive logic? For after forming the causal (inductive) hypothesis that certain stones splinter other stones, he then must logically deduce that this particular stone will splinter this other stone. He would then be employing the hypothetico-deductive method, which, as its name implies, involves deduction.
Indeed, we can even infer that deduction predated induction, since if one caveman told his family that a rabbit is behind the bush then they could logically deduce that the rabbit is not in front of the bush. This does not require knowledge of any causal regularity. Causal reasoning is a type of thinking; but logic is a prerequisite for language and thought to be possible at all.
Dave S. Henley, Graaff-Reinet, South Africa
Dear Editor: Answering your recent reader survey, I asked for more on David Hume and less lads’ stuff like references to The Matrix, a film which few elderly ladies like me have seen.
Although I don’t normally do things like this, I thought I would write and congratulate you on having both Hume and The Matrix in the current issue, although I can’t remember where I found The Matrix mentioned. Hume, of course, was on the cover.
I am also grateful to Dr Earnshaw for resolving ‘Hume’s problem’, but I find it quite easy to deduce that the future will be like the past and that The Matrix will turn up again in the next issue.
Anita Miller, Alton, Hants
Genetic Variation
Dear Editor: The article ‘Informing People About Their Genetic Risks’ by Jan Deckers and Dominic Hall (Issue 119) arguably takes too narrow a view of the ethical issues around genetic testing by presenting an unrealistic scenario and misrepresenting the ‘joint account’ model of genetic information proposed by Parker and Lucassen (British Medical Journal 329, 2004).
The scenario described by Deckers and Hall involves the disclosure to a patient’s relatives of the presence in the patient of a specific APOE genotype which increases the risk of late-onset Alzheimer’s Disease. Deckers and Hall suggested that this disclosure could come ‘out of the blue’. Firstly, at least in the UK, clinical geneticists would be very unlikely to contact relatives of any patient out of the blue. Any contact would normally depend on the individual having shared their genetic results with relevant relatives. Secondly, given that it is a late-onset condition with no treatment, where the risk is probabilistic rather than absolute, and the limited predictive value is further complicated by the likely existence of modifier genes, APOE is almost the paradigmatic case where contacting relatives would be likely to cause more harm than good. The major network of genetic testing laboratories in Britain (UKGTN) has no laboratories prepared to offer APOE testing in the context of Alzheimer’s (although testing is available for APOE genotypes in the context of hyperlipidemia, where treatments are available).
The joint account model is accused by Deckers and Hall of asserting that geneticists should “inform family members of their own genetic risks as a matter of course” and of using the ‘ludicrous’ analogy that genetic information is “like the information held on a joint bank account, where all parties are informed by the bank about financial affairs as a matter of course.” Actually the discussion by Parker and Lucassen is much more nuanced. Firstly, they use a more realistic scenario, involving the female carrier of a serious X-linked condition (Duchenne muscular dystrophy, usually only affecting males) who refuses to share this information with her pregnant sister. Their discussion includes the potential harm of non-disclosure to the pregnant sister, as well as the potential harm of disclosure to the original patient. Where the harm to the patient of sharing the information is relatively small, they say it could be argued that “justice demands the routine sharing of the benefits of genetic information except in exceptional circumstances.” Considerations of autonomy do not only apply to the first member of the family to be tested. Though every case will be different, and the relative harms of different courses of action must be carefully weighed, the joint account model is an important and workable approach that takes into account the unique features of genetic risk.
David Bourn, Newcastle upon Tyne
Remembering Wonderland
Dear Editor: Professor Tallis says (Issue 119) that there is no way that the brain, even with its estimated 86 billion neurons, countless trillions of synapses and various specialised areas, can fully account for our episodic memories. He proposes instead that there is a part of us ungoverned by the laws of physics which somehow does what is necessary.
Now I know that your columnist does not think much of computer analogies as explanations for the working of the mind, so let us start with old technology. If I had kept paper diaries of my life’s twists and turns, and wanted to know more about what happened on a particular day, I could look at the relevant diary to find out what had happened and any feelings I had recorded. If, though, I had kept my diaries in electronic form, then I could go further and find out when I last mentioned, say, eating jelly and ice-cream, and then read the entry in full to find out what I had recorded about that episode. To achieve this, the data would have to be in a searchable form. As a minimum, each day should have a few relevant key words allocated to it which the software could match with the search terms. If, as I would suggest, this is an in-principle description of how our minds work, then, contrary to Professor Tallis’s assumption, I would not have to know in advance what the entire entry looked like, or even that there was an entry, in order to find data which may correspond in some way with the sort of thing I was looking for. Of course, I may be disappointed if the keyword ‘jelly’ were in fact a reference to ‘gelignite’.
Equally important is the reason for starting to look for such an entry in the first place. The recall of my episodic memories seems to be prompted by events currently going on around me, including other memories playing out in my mind. The point is that I don’t deliberately search for memories out of the blue, wholly unrelated to my present circumstances. This therefore means that I don’t have to first remember what it is I want to remember, as Professor Tallis says. I would suggest instead that memories are automatically retrieved in a continual, contextual, necessarily imprecise, search process. Professor Tallis says that having memories triggered by events would result in chaos because, in his view, everything is in some way related to everything else. But this presumes a search process which is far less sophisticated than, for instance, Google has managed to create to make money from us. Precisely how my memories are indexed and the search criteria applied I do not know, but I see no infinite regress in describing the process in this way.
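The keyword-matching mechanism the letter describes can be sketched in a few lines of code. This is only an illustrative toy, not a model of the brain: the dates, entries, and keywords below are invented for the purpose of the example.

```python
# A minimal sketch of the keyword-indexed diary search described in the
# letter. All entries, dates, and keywords here are invented examples.

diary = {
    "2016-08-14": {"keywords": {"party", "jelly", "ice-cream"},
                   "entry": "Ate jelly and ice-cream at the party."},
    "2016-09-02": {"keywords": {"quarry", "gelignite"},
                   "entry": "Watched a gelignite charge set at the quarry."},
}

def recall(*cues):
    """Return the dates of entries whose keywords overlap the given cues."""
    cue_set = set(cues)
    return sorted(date for date, day in diary.items()
                  if day["keywords"] & cue_set)

# A cue such as "jelly" retrieves the relevant entry without our needing
# to know in advance that the entry exists:
print(recall("jelly"))  # ['2016-08-14']
```

As the letter notes, such a search needs no advance knowledge of the entry's existence, and it can misfire in just the way described: a cue matches a keyword, not the meaning the keyword once stood for.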
Thomas Jeffreys, Coleshill, Warwickshire
Multiple Multiverse Problems
Dear Editor: I was interested in Rui Vieira’s article ‘Can the Multiverse Give You An Afterlife’ in Issue 119, but I believe Vieira is placing wishful thinking above likelihood for the following reasons:
1) The Multiverse is only one interpretation of quantum mechanics. Since there is no observational evidence for the Multiverse, we should apply Ockham’s Razor and test other interpretations of quantum mechanics before accepting multiple universes.
2) It can be claimed that there is an infinite number of substantial entities in an infinite Multiverse. However, there is also an infinite number of conceivable entities. The question is, are these two sets of entities identical – are all the members of one of these sets in the other set? By Cantor’s theorem, one infinite set can have more members than another infinite set (for example, the set of real numbers has more members than the set of integers, even though both sets have an infinite number of members). Thus not every member of the set of conceivable entities need be a member of the set of substantial entities, and it does not inevitably follow that all possible arrangements of energy/matter will be realised an infinite number of times. Rather, nature follows the principle of minimising energy in producing entities.
3) Given the problem outlined above, if the replacing of first-person consciousness can take place, our first-person consciousness is more likely to be replaced by another first-person consciousness than to do the replacing. This likelihood increases as we age, as the number of available identical bodies in parallel universes decreases with time due to the myriad possible ways we have succeeded in escaping the Grim Reaper. For example, how many times have we avoided being hit by the bus which eliminated one of our doppelgangers?
4) Our bodies have built-in obsolescence, so all Vieira is offering is a delay in our inevitable demise rather than an afterlife in any conventional use of that term.
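The cardinality claim in point (2) rests on Cantor’s theorem, which for readers who want the formal statement can be put as follows (the second line applies it to the letter’s integers-versus-reals example):

```latex
% Cantor's theorem: every set S is strictly smaller than its power set.
|S| < |\mathcal{P}(S)|
% The letter's example: the integers are countably infinite, while the
% reals have the strictly larger cardinality of the continuum.
|\mathbb{Z}| = \aleph_0 < 2^{\aleph_0} = |\mathbb{R}|
```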
We should focus on our real footprint, our effect on the world we leave behind, rather than dream of immortality.
Russell Berg, Manchester
Social Rights
Dear Editor: Jesse Tomalty’s great opening article in Issue 118, ‘Is There A Human Right To Internet Access?’, distinguishes between ‘natural’ and ‘legal’ rights. The notion of ‘natural rights’, however, seems to me a wholly unnatural thing. She says that natural rights are ‘held universally by all humans simply in virtue of being human’ and that, in contrast, legal rights are ‘social constructs’. Can this be true? Since natural rights are supposed to be in the interest of at least an overwhelming majority, it is difficult to see how they could be independent of the societies out of which they sprang.
Consider freedom. Freedom is only actively sought after because our societies impose some restrictions upon individual human beings. Because the concept does not make sense outside of a society, it follows that natural rights are society-dependent and, therefore, social constructs, like legal rights.
To avoid the conclusion that established natural rights evolved with societies, one needs to make natural rights so vague as to equate them with survival instincts, which would be unhelpful. A more useful distinction might oppose ‘legal’ to ‘desired’ rights; realised versus unrealised; compromised implementation versus uncompromised ideas, etc.
Pedro Grilo, London
Imagining Kafkaland
Dear Editor: After reading Séan Moran’s ‘Daydreaming in Prague’ in Issue 118 (Prague being Kafkaland, by the way), I’m reminded of a distinction made by Coleridge between fancy (a rough equivalent to mind-wandering) and imagination (a rough equivalent to focused daydreaming). As Thoreau is often paraphrased: “It’s alright to build castles in the air. The idea is to build foundations under them.” Moran complements this by recognizing the import of both fancy and imagination and how they interact. However, in both cases, the focus is on the creative act. But I’d argue it’s bigger than that: consider the coup that fancy, via consumer capitalism, has imposed on our culture and the damage it will do if left unchecked by imagination. As Moran rightly points out, fancy or mind-wandering is essential to any creative act for the very reason that it opens us up to unforeseen possibilities and gets us beyond ourselves. But, as its critics note, it is lazy in nature and prone to baser impulses. At its most innocuous, it leads to the basement overmen who sit in environmentally-controlled caves, their faces blazing in the glow of computer screens as they get ready for action in some post-civilization future. But even more problematic is the way it’s exploited through the media. Capitalism has perfected the art of selling possibility, with ads saturating us with visions of lives we could lead if only we bought the right products. Meanwhile, the culture of celebrity holds us in awe and reality TV and YouTube dangle the possibility that our lives could be so interesting as to gain us our fifteen minutes of fame.
With all this mental wandering, and minimal effort to anchor it, it’s no wonder that America (with collateral effects for the rest of the world) is looking at a presidency that galloped in on what felt like a Quentin Tarantino revenge fantasy; one that was and is propped up not so much on alternative facts as by the unchecked fancies of the alt-right.
Fancy and imagination – mind wandering and focused daydreaming – are not just about how we create; they are about how we come to engage with our world.
D. Tarkington, Nebraska
Midgley’s Far Reach
Dear Editor: Another brilliant (double) essay from Mary Midgley (Issues 116, 117). She rightly argues that matters of fact must intrude on some matters of value; but she doesn’t quite go the whole hog and state that David Hume was plain wrong when he claimed that you can’t get an ‘ought’ from an ‘is’. We can gain ethics from the practicalities of trying to meet our wants and needs, bearing in mind the commonality of those needs, the need for cooperation in meeting them, and the fact that one’s happiness does not usually rest on another’s misery. The real obstruction is that many don’t find the result of this process of thinking emotionally satisfying or convincing. But is that a fair reaction? This approach puts morality on a solid footing for the first time. It’s also worth remembering that it does not involve compulsion. That’s why we have prisons.
Midgley also provides excellent criticism of over-specialisation in philosophy. One crosses a line into nonsense if one assumes that sound philosophical procedure only ever involves the mental microscope and never the wide angle lens, or that wider intelligibility is of no consequence.
Daryn Green, London