Information, Knowledge & Intelligence
Alistair MacFarlane considers the differences between these crucial concepts, and the implications for how we think about computers.
Anybody reading this will have interacted with a computer. These exchanges can hardly be called conversations. One no more converses with current computers than a soldier on parade converses with a drill sergeant. However, when working on the Internet using your personal computer as an intermediary, things are rather better. Searching for a book, inspecting its contents online, then ordering it, can be a much more satisfying form of interaction. You are never, however, under any illusion that you are dealing with a fellow human: you don’t ask the Internet, “What’s the weather like down there?… Have you read it yourself?” Computer-driven machines can now carry out a huge range of very highly skilled tasks, from navigating and landing aircraft, to manufacturing and assembling a wide range of products. How feasible is it that, in a few decades, one might have great difficulty in knowing whether or not one was talking to a computer? And if, while deftly evading all your attempts to engage it in stimulating conversation (“I’m sorry, but I’m too busy to chat with you right now, what was it you wanted?”), it could answer virtually any general knowledge question, provide detailed guidance over the whole range of literature and science, do seriously advanced mathematics, and play a mean game of poker, should you call it intelligent? After all, it knows the capital, population and main exports of every country in the world, and you don’t. Furthermore, are there serious implications for society if computers linked to machines and communication systems could run all the railroads, fly all the aircraft, manage all the traffic, make all the cars and other products, act as vast reservoirs of factual knowledge, and perform almost any other activity requiring great skill? These are not merely interesting philosophical questions. Should machines reach the requisite levels of knowledge and skill, their integration into society could pose very severe problems.
To address these questions we need to define carefully three basic concepts – information, knowledge and intelligence – and explore the relationships between them. A good way to begin to distinguish between them is to note how they reflect our relationship to present, past, and future. Information describes: it tells us how the world is now. Knowledge prescribes: it tells us what to do on the basis of accumulated past experience. Intelligence decides: it guides, predicts and advises, telling us what may be done in circumstances not previously encountered, and what the outcome is likely to be.
Information
Information is a meaningful, shareable pattern. We have evolved as a species, and learned as individuals, to recognise and ascribe meaning to patterns. A good example is the text you are reading now. This linguistic pattern is realised physically, grasped mentally, and can be shared socially. Medical technology routinely correlates images of brain activity with mental processes, illustrating how our mental constructs are grounded in physical patterns in our brains. Over a vast span of time, evolution has given us a set of interlinked physical processes by means of which we can transform these information-bearing brain patterns into other physically-grounded patterns, such as sound waves and signs, so that we can communicate and so share our information.
Since the patterns that carry information have a physical realisation, we can measure amounts of information. All patterns, such as the pictures on a computer or phone screen, can be built up from fundamental building blocks, each elementary block supplying one bit of information. (The bit is the basic unit of measurement of information.) There is a well-established quantitative theory of physical information that underpins information technology. However, this theory completely ignores the semantic [meaning-related] and social aspects of information. Science and technology deal only with the physical part. For a satisfactory philosophical treatment, information must be treated as a triadic concept: we must deal with its mental, social and physical aspects. Despite this, some misguided attempts have been made to construct philosophical theories based solely on a physical characterisation of information.
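Because information so measured is purely physical, the calculation can be made concrete in a few lines of code. The following minimal Python sketch applies Shannon's formula for the average information carried by a source of symbols; the coin-flip probabilities are illustrative assumptions, not figures from the text.

```python
import math

def shannon_bits(probabilities):
    """Average information per symbol, in bits, via Shannon's
    formula H = -sum(p * log2(p)) over the symbol probabilities."""
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

# A fair coin flip carries exactly one bit of information.
print(shannon_bits([0.5, 0.5]))   # 1.0
# A biased coin carries less, because its outcome is more predictable.
print(shannon_bits([0.9, 0.1]))   # ~0.469
```

Note that nothing in the calculation touches what the symbols mean: it quantifies only the physical aspect of information, which is precisely the limitation noted above.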
Knowledge
According to Plato, knowledge is justified true belief, and this was also the view of many philosophers until recently. But in the light of modern evolutionary and biological theory, we now know this is only part of the story. We are born with innate knowledge, whereas we are not born with innate beliefs. A newborn baby knows how to breathe and suckle, but it does not have religious, or any other, beliefs.
Knowledge is a store of information proven useful for a capacity to act. Some knowledge is innate, but most is gained by interaction with the world. For the simplest kind of agents, like insects or robots, all their knowledge is built into their structure. Large molecular machines, like the ribosomes in our cells, ‘know’ how to make proteins because their complex shapes have evolved to possess this potential. A thermocouple ‘knows’ what temperature corresponds to the closure of a switch because its designer made it in such a way that the switch will close when a certain temperature is reached.
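The thermocouple example can be made vivid with a minimal sketch in Python (the trip temperature below is an invented illustration): the simulated switch 'knows' only what its designer built into its structure, and can neither learn nor revise that rule.

```python
class ThermalSwitch:
    """An agent whose entire 'knowledge' is fixed in its structure,
    like the thermocouple described in the text."""

    def __init__(self, trip_temperature=100.0):
        self.trip_temperature = trip_temperature  # fixed at 'birth'

    def is_closed(self, temperature):
        # The rule is built in by the designer; the agent
        # cannot learn, revise, or even represent it.
        return temperature >= self.trip_temperature

switch = ThermalSwitch()
print(switch.is_closed(98.5))   # False: below the built-in threshold
print(switch.is_closed(101.0))  # True: the switch closes
```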
The store of knowledge with which we are born does not enable us to cope with all the problems the world poses. As we interact with the world and with society, we have to generate new knowledge by learning. This gives rise to candidates for knowledge that we call beliefs. Our actions are driven by knowledge and beliefs. In the jargon of philosophy of mind, beliefs are intentional mental states, meaning that they are about the world. Belief tells us how the world might be. Knowledge tells us the way the world is. Learnt knowledge is distilled out of beliefs. Explicit knowledge is distinguished from belief in terms of its coherence, persistence, reliability, and effectiveness: knowledge is achieved through an accumulated and refined experience which ensures that our actions conform to what we experience. So only sustained experience based on repeated interactions with the world can justify the upgrading of a belief to knowledge. But beneath the veneer of learning and experience, we are complex, irrational, and emotional creatures constantly negotiating our own compromises between beliefs and knowledge.
The ability to form beliefs and test them in action is essential to our survival in an ever-changing world, and the development of this ability was a crucial step in our evolution. The flexibility and robustness of intentional action have an immense evolutionary advantage over purely innate, fixed-representation-based activity such as instinct. It allows the manipulation and examination of possible options before a final determination of action.
Only explicit, language-based knowledge can be easily transmitted via books and machines. But much of our knowledge is tacit; it is not expressible in language. This was memorably expressed by Michael Polanyi when he said, “We know more than we can tell.” This is true not only for innate knowledge, but also for those internalised skills which come from observing and copying others. Furthermore, much of our knowledge is difficult to access at will, and may require group interaction (brainstorming) to surface. Accessing and validating human knowledge will remain a difficult process.
Belief and knowledge are mental phenomena, which are notoriously resistant to simplistic physical explanations. Attempts to explain contents of consciousness such as belief in purely physical terms have had very limited success. The difficulty stems from their immaterial nature. They can be correlated with physical events, but correlation is not explanation. Description, however, is a much more modest goal than explanation. A top-down description of how our experience is grounded in our physical, mental and social interactions is therefore a philosophical construction or way of looking at the world, not a scientific theory.
Intelligence
Unlike belief and knowledge, intelligence is not information: it is a process, or an innate capacity to use information in order to respond to ever-changing requirements. It is a capacity to acquire, adapt, modify, extend and use information in order to solve problems. Therefore, intelligence is the ability to cope with unpredictable circumstances. But intelligence is not merely analytical: to survive and flourish in society, we must also have social and emotional intelligence. (However, I do not here assume an equating of intelligence with consciousness.)
Intelligence involves a capacity to adapt and learn that has been accumulated throughout our species’ development, yet intelligence is something we have by virtue of our individual genetic inheritance. By assiduous practice we can become more skilled. By diligent study and careful observation we can become more knowledgeable. But there is no way we can personally become significantly more intelligent. You can no more double your natural intelligence than you can double your natural height.
AI And You-And-I
‘Artificial Intelligence’ is a misleading term for the technology of endowing machines with agency (an ability to act autonomously in the world). In our present state of technical and scientific knowledge, a much better term would be ‘Artificial Knowledge’, although this doesn’t have quite the same ring. Recent advances in the technology for the search and retrieval of semantically-tagged factual information – as in the systems used by Google’s search engines – have led to astonishing improvements in the speed and relevance of Internet searches. (The semantic tagging is provided by human users: the more frequently users use specific search words, the more weight these are given by the search procedures.) By virtue of these techniques, information technology can make accessible a virtually unbounded store of knowledge.
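As a toy sketch of the weighting idea just described, the following Python fragment ranks documents by how often users have searched for the terms they contain. The query log, documents and scoring rule are invented assumptions; real search engines are vastly more elaborate.

```python
from collections import Counter

# Invented query log: each entry is one user search term.
query_log = ["python", "python", "philosophy", "python", "intelligence"]
term_weight = Counter(query_log)  # frequency of each search term

# Invented documents, each reduced to its set of terms.
documents = {
    "doc1": {"python", "tutorial"},
    "doc2": {"philosophy", "intelligence"},
    "doc3": {"python", "philosophy"},
}

def score(doc_terms):
    # A document scores higher when it contains frequently searched terms;
    # Counter returns 0 for terms no user has searched for.
    return sum(term_weight[t] for t in doc_terms)

for name, terms in sorted(documents.items(), key=lambda kv: -score(kv[1])):
    print(name, score(terms))   # doc3 3+1=4, doc1 3, doc2 2
```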
This capability for the acquisition of knowledge applies not only to facts, but to any process that can be precisely and explicitly described in the languages used to program computers. Thus in principle, and increasingly in practice, machines can be made skilful in mechanical tasks to any level attainable by humans, and higher, and can be made knowledgeable to depths far beyond any individual human capacity. Creating machine intelligence, however, poses challenges of an altogether different order. The reason for this is that intelligence is basically a capacity to use what has been learned to deal with the novel and the unexpected. Intelligence deals with what we don’t know, so it is not easily describable by rules and procedures; it does not simply draw on a reservoir of facts. So this is a process for which we are presently unable to formulate a precise description that we could program into a computer. We must note however the crucial fact that much of the knowledge on which human agency depends is objective, formal and explicit, and so is of a form implementable via machine agency.
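For instance, a skill that can be stated precisely and explicitly, such as finding a value in a sorted list, can be handed to a machine in its entirety. The short Python sketch below illustrates the contrast: every step of the skill is written down, whereas no comparably explicit recipe exists for coping with the novel.

```python
def binary_search(sorted_items, target):
    """Find target in a sorted list; a skill whose every step
    can be stated explicitly, leaving nothing tacit."""
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return mid
        if sorted_items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1  # not found

print(binary_search([2, 3, 5, 7, 11, 13], 7))  # 3
```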
The differences, both qualitative and quantitative, between human and machine agency can be summarised in terms of three gaps corresponding to different levels of agency: a skills gap, a knowledge gap, and a personhood gap. We cannot hope to match machines in terms of the range and accuracy of their perceptions, the speed and power of their calculations, or the delicacy and precision of their manipulations. As processing power, the range and sensitivity of sensors, and the creation of new forms of actuator continue to develop, machines will become ever more capable over the whole range of manipulative skills. An ever-widening skills gap will open up between human and machine. Nor can we hope to match machines in handling intractable masses of data, in storing and retrieving knowledge, or in applying processing power to formal, symbol-based systems such as mathematics. There will be an ever-increasing knowledge gap between human and machine.
However, the immensely complex information-processing systems that have evolved in the human brain cannot be replicated in any machine by any process of formal design or experiment, certainly not for decades to come, perhaps not for centuries. The complexity of our brains is vast. It has arisen from a compression, into the genetic material that defines us, of information accumulated over aeons of our species’ evolution, and there are no short cuts to its replication. So there will remain a personhood gap between human and machine that will continue to make human levels of intelligence, emotional insight and ability to handle uncertainty unavailable to machines. Within any currently conceivable future horizon of prediction, human and machine agency will remain complementary. We will have to learn how to live with machines, but they cannot replace us.
Living With Machines
We can now respond to the three questions posed in the introduction. First, could we, in the fairly near future, tell whether we were talking to a computer? As with all philosophical questions, the answer depends crucially on how the terms involved are understood. This question is best answered in two parts: specific and generic. Specifically, for basic question-and-answer computer systems, there would be no financial incentive for making them any more sophisticated than necessary in terms of general knowledge and pseudo-conversation. Thus, although there will be a progressive removal of their more irritating limitations, it will remain clear that no human was directly involved. For more complex systems designed to answer general knowledge questions, or to solve mathematical and scientific problems, the answer is counter-intuitive: the more they know, and the more rational they appear, the surer we could be that they’re not human! We would experience a dawning conviction that we’re talking to some sort of automated encyclopedia, perhaps equipped with a low form of cunning. It could be programmed in such a way that, when faced with any question it could not handle, it would always give an evasive answer (“I’m sorry but I’m not allowed to answer personal questions…”). So it will be increasingly difficult to be absolutely certain. You might, after all, be talking to someone who had just won a Nobel Prize.
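The evasive strategy just described is, at bottom, a simple rule, as this toy Python sketch shows: answer whatever falls within a stored repertoire, and deflect everything else. The questions, answers and fallback line are invented for illustration, not quotations from any real system.

```python
# Invented repertoire of canned question-and-answer pairs.
KNOWN_ANSWERS = {
    "capital of france": "Paris.",
    "square root of 144": "12.",
}
EVASION = "I'm sorry but I'm not allowed to answer personal questions..."

def respond(question):
    key = question.lower().strip(" ?")
    # Fall back to evasion whenever the question is outside the repertoire.
    return KNOWN_ANSWERS.get(key, EVASION)

print(respond("Capital of France?"))          # Paris.
print(respond("Have you read it yourself?"))  # the evasive fallback
```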
So in principle, a supercomputer could, I believe, be programmed so that engaging it in general conversation, however wide-ranging, would not be enough to convince you that you were not conversing with a human. Skill in answering questions, and even more in evading them, linked to a comprehensive general knowledge and a great ability to solve scientific, logical and mathematical problems, could be combined with social skills and a spurious empathy. A much more stringent approach would be needed to discover the truth, involving searching tests for creative intelligence, deep empathy and social adaptation. And devising adequate tests will become more and more difficult as processing power increases. If dedicated experts could no longer decide one way or the other, the implications would be momentous. So, to answer the second question: should we call such a computer intelligent? Not yet, but we could certainly call one very knowledgeable. However, achieving human levels of intelligence, empathy and emotional capacity probably lies far into the future.
The answer to the third question is that there are immensely important implications for society as computer skills increasingly surpass human skills, and the huge economic gains that would accrue from an increasing use of information technology harnessed to machines will prove irresistible. Our ultimate challenge will not be how to develop such technology further, but how to live with it in a new form of society. So the basic problems facing that society will be political rather than technical. Many of the core problems of philosophy will need to be re-examined.
There are grounds for both optimism and pessimism when contemplating this future. Those of a pessimistic nature may well conclude that the strains imposed on society by the continuing impact of information technology on employment, commerce and education may prove intolerable. Those of a more sanguine outlook might conclude that, with all basic material needs satisfied, and with the vast resources of an ever-increasing fund of technical and scientific knowledge readily available, we will be entering a new Golden Age free from want, disease and privation. Sceptics might point out that, when speculating on what might lie beyond a presently insurmountable mountain range, all we can do is survey thoroughly the terrain leading up to it, and issue some warnings.
The philosophical, scientific, technical, economic and social consequences promised by the ever-increasing use of information technology harnessed to machines are enormously significant. Scientists and technologists may find the challenges exhilarating, but the consequent political, social and economic difficulties for the rest of the population are daunting. Those who first see clearly what is in store for us will face an awesome prospect.
© Sir Alistair MacFarlane 2013
Sir Alistair MacFarlane is a former Vice-President of the Royal Society and a retired university Vice-Chancellor.