
Artificial Consciousness: Our Greatest Ethical Challenge

Paul Conrad Samuelsson takes the perspective of the computer for a change.

Debate about cutting-edge technological advancements is philosophy à la mode. At the forefront is artificial intelligence, which looks set to become the greatest technological leap in history. No one can comprehend the extent of its possible uses; but among the feats already carried out by merely semi-intelligent software are beating the world’s best human players at chess, diagnosing cancer patients more reliably than trained oncologists, writing music that listeners can’t distinguish from the human-composed, and reading and commenting on extensive legal contracts in seconds. The potential applications of AI are so astounding that it seems we’ll be in a position to outsource all manual work, creative problem-solving, even intellectual labour, in less than a century. It is the greatest promise of our time.

Yet, when the great techno-cultural icons of our time get on stages around the world to discuss AI, the picture is not always optimistic. AI poses some truly enigmatic concerns. Some of the more existential problems have taken centre-stage, concerning the direct risk to humanity of the literally inconceivable potential of self-developing artificial intelligence. Sam Harris, Elon Musk, Max Tegmark and Nick Bostrom all warn of the risk that an AI which can improve itself could come to annihilate modern society as the consequence of a poorly stated program or neglectful management. For instance, given some task to fulfil, the AI might work out that the easiest way to complete it is to turn the entire planet into a research lab, removing all functions not related to the goal, including all biological life – and doing this with all the emotional investment of a construction crew removing ant hills to make way for a new highway.

The prospect of mass annihilation at the hands of super-powerful computers is terrifying, all the more so for originating in something as human as faulty programming or sloppy routines. A multitude of movies and books depict menacing cyber-antagonists creating hopeless dystopias, and this may strike you as the greatest moral risk we face in continuing to develop artificial intelligence. I happen to think that this is not the case, and that our new technology might yield even worse states of affairs. The greatest ethical risks in fact concern not what artificial intelligences might do to us, but what we might do to them. If we develop machines with consciousness, with the ability both to think and to feel, then this will necessitate an ethics for AI, as opposed to one merely of AI. Eventually, we will have to start doing right by our computer programs, who will soon fulfil whatever criteria are required to be considered moral subjects.

Illustration © Jaime Raposo 2019. To see more art, please visit jaimeraposo.com

There are extensive arguments readily available for a positive answer to the question of whether computers could actually become conscious, which I can only summarize here. Basically, neuroscience seems to suggest that our entire conscious experience originates from our neural activity. This is not the same as a materialist reduction of the mental to the physical, but rather the assertion that whatever consciousness is, its origin is physical, in the brain and central nervous system. Those who argue against even the theoretical possibility of digital consciousness seem to disregard the fact that human consciousness somehow arises from configurations of unconscious atoms. Moreover, it seems that human neurological processes can be described in terms of neural networks – which can already be emulated in computers to the extent of allowing them to recognize pictures or play complicated games. These artificial networks are still comparatively primitive; but eventually – perhaps soon – they will surpass our own neural nets in capacity, creativity, scope and efficiency. So although the ability of computers to feel and suffer seems far off, it is getting nearer all the time.
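To make the kind of artificial neural network referred to here concrete, below is a minimal sketch – my own toy illustration, not anything from the article – of a two-layer network trained by gradient descent to learn the XOR function. All names and parameters are illustrative; real image-recognizing or game-playing networks are vastly larger, but built from exactly this sort of machinery.

```python
import numpy as np

rng = np.random.default_rng(0)

# The four XOR input/output pairs.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Weights and biases for a 2 -> 4 -> 1 network.
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(10_000):
    # Forward pass: propagate inputs through the two layers.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: gradients of the squared error.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Gradient-descent updates to weights and biases.
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)

print(out.round(2))  # typically converges towards [[0], [1], [1], [0]]
```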

Assuming, then, that we can come to create consciousness digitally, it ought to be obvious that the suffering of AI is potentially indefinitely more horrendous than even the worst imaginable human suffering. We stand in a position to develop the means for creating amounts of pain which vastly outweigh any previously seen in the history of human or animal suffering. The obstacles to creating biological suffering are considerable – the number of possible biological beings is relatively low, their upkeep is high, and they are prone to becoming desensitized to painful stimuli. In the digital world, where simulated consciousnesses can be programmed in computers to be subject to whatever laws we wish, these limitations disappear.

The consequences are not fully comprehensible, but let me sketch an image of what could be possible. Someone could, for example, digitally recreate a concentration camp, fill it with sentient, suffering AI, and let it run on a loop forever. It may even be possible to turn up the speed of the suffering, so that a thousand iterations of the camp are completed every second. From the perspective of the AI, this will feel no different from what was felt by those who suffered through the real thing. Then the programmers use the copy-and-paste function on their computer, and double it all again… So the reason that pain-disposed AI is the greatest ethical challenge of our time is that it could so easily be caused to suffer. Picture a bored teenager finding bootlegged AI software online and using it to double the amount of pain ever suffered in the history of the world – all in one afternoon, and from the comfort of a couch.
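The computational premise of this thought experiment is worth making explicit: a simulated process, once encoded as data, can be duplicated with a single instruction and re-run as fast as the hardware allows. The following deliberately abstract sketch is my own illustration, using a placeholder Simulation class that simulates nothing sentient; it only shows how cheap duplication and acceleration are in principle.

```python
import copy
import time

class Simulation:
    """Placeholder for any simulated process encoded as data."""
    def __init__(self):
        self.ticks = 0
    def step(self):
        self.ticks += 1  # stands in for one tick of simulated time

original = Simulation()
# Duplication is a single line, at essentially no cost.
copies = [copy.deepcopy(original) for _ in range(1_000)]

start = time.perf_counter()
for sim in copies:
    for _ in range(1_000):
        sim.step()
elapsed = time.perf_counter() - start

total = sum(sim.ticks for sim in copies)
print(f"{total:,} simulated ticks across {len(copies)} copies in {elapsed:.2f}s")
```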

If this description does not stir you, it may be because the concept of a trillion subjects suffering limitlessly inside a computer is so abstract to us that it does not engage our empathy. But this itself shows us that the idea of the suffering of a sentient program is not necessarily sufficient to give rise to empathy in us – making every person a potential monster from the perspective of the computer.

Perhaps this development seems exceedingly unlikely, merely a perverse philosophical thought experiment. This would be a failure of imagination on your part. Artificial consciousness will be a desirable development to many, and will have enough applications to warrant significant investment. AI has already proven profitable. That we will make computers properly conscious is already foreshadowed by software which engages directly with us, as smartphone companions and chatbots. Whether out of morbid curiosity or financial incentive, applications of digital sentience will become increasingly widespread. Some people will want to upload their own minds to the Cloud or make backup copies on hard drives; others will want effortless interesting adventures and interactions. The technology will become available, and sooner or later artificial consciousnesses will be made that are able to suffer, perhaps in order to ensure authenticity or self-correction.

If there are such things as cultural and moral progress, they pale in comparison to the technological explosion that humanity has experienced in the last ten thousand years, faster still in the last century. The advancement of invention is palpable, high-speed and tremendously useful to everyone – few people feel they need further motivation to embrace ever newer and more audacious gadgets, software, and weapons. Yet, as the story progresses, our inventions become more powerful and thereby riskier. So far, the potential mishaps have been manageable. Our historical nuclear disasters have been survivable because of their relatively small scale. Artificial intelligence is an invention which promises to be far more destructive if misused. There are the existential risks to humanity already raised by the authors mentioned above. Now we have also seen that there are consequences even more problematic than nuclear holocaust, as weird as that may seem.

Artificial intelligence has for decades been the greatest hope for transcendence and fulfilment in the secularised West. Chasing the unyielding dream of perfecting the world, convinced that we are entitled to anything for which we strive, we put ourselves, as so often before, beyond morality. But now we’re claiming our reward at potential costs so terrifyingly great for others that they resemble Dante’s Inferno or Memling’s Last Judgement, perhaps as just the first monument of the forthcoming Homo deus.

© Paul Conrad Samuelsson 2019

Paul Conrad Samuelsson is a student of philosophy at Stockholm University and Humboldt Universität zu Berlin, and of neuroscience at Karolinska Institutet.
