Technologists & Ethicists

Stephen L. Anderson laments inadequate moral insight among tech leaders.

For many of us the world’s first global pandemic for a century was a watershed moment. The Covid-19 crisis was plausibly claimed to have been caused by risky ‘gain-of-function research’: that is, through the deliberate enhancing of viruses. That would be a tremendous technological achievement, but also an ominous one. It would also be a global example of what can happen when human technical ability is untethered from adequate moral reflection: global disaster. Nor is this the first time we’ve glimpsed how terrifying human invention can be. As Robert Oppenheimer famously exclaimed when the first nuclear bomb was detonated, “Now I am become Death, the destroyer of worlds.” He wasn’t kidding. Due partly to his assistance, we are now indeed capable of destroying all life on this planet. None of us can doubt anymore that the entire world has a stake in seeing technology managed well.

I was reminded of this again when reading two books about AI: Michael Kanaan’s T-Minus AI (2020) and John Lennox’s 2084 (2024). The central concern of both is the disconnect between the impact of our global computer technologies and the ethics required to manage them. Kanaan is more guardedly positive about the possibilities of AI, whereas Lennox is less enthused; but both express similar serious hesitancies about where AI is about to take us all.

A lot is happening in the field right now, and it’s happening fast. Already, AI algorithms are being used to target personalized advertising, secure sensitive areas against incursion, identify travellers at airports and borders, and more. But it’s the future applications that are particularly concerning, including AI’s potential for regulating populations, suppressing free speech, restricting economic opportunities, hounding dissidents, and securing totalitarian regimes. No less a person than Vladimir Putin recently put it this way: “Artificial intelligence is the future, not only for Russia, but for all humankind. It comes with colossal opportunities, but also threats that are difficult to predict. Whoever becomes the leader in this sphere will become the ruler of the world.”

Bluster? Hyperbole? Maybe. But when you drill down into what AI actually involves, what it already does, and what it seems poised to do, such statements seem less like science fiction and more like inevitable fact. This is an incredibly powerful technology, and one set to transform all aspects of life in ways we can now hardly even imagine.

The fact that technology has two sides to it – an ability to help and an ability to harm at the same time – is not news. Back in 1929, in his book Civilisation and Its Discontents, Sigmund Freud questioned whether modern life was actually making people any happier. He pointed out that every innovation comes with a downside. That we can travel and telephone, for example, means that our communities can become attenuated and broken. That we have medicine and hygiene can lead to overpopulation. Had he lived long enough, he might have added that birth control can lead both to sexual libertinism and to women being treated as sexualised objects by men. For every upside technology brings us, there is inevitably a downside. While these are not always equal, gaining control of our technologies requires us to understand both: to weigh the various impending goods against the potential evils, and to make ethical choices about what we do in response.

Such decisions have never been more pressing; and in a way this is good news for philosophers, particularly ethicists. New technologies are continually appearing, the stakes keep rising, and with them, the necessity of moral reflection to inform our responses. No longer can we believe that we can manage our increasing technical capabilities without attending to their ethical implications. One would think, therefore, that ethicists in particular would enjoy increasing prestige and relevance in the public square, if not also guaranteed job security. There’s lots of work for them to do. Yet for several reasons, this is clearly not happening. New technologies continue to appear apace, but not the moral clarity that needs to accompany their implementation. This leaves many of us with a foreboding feeling of being caught in a machine that’s running out of control. The solution offered by prophets of the future of technology always seems to be the same: think harder, be more suspicious, and practise better ethics. But such trite exhortations will not meet the challenges. Ethics and technology are perilously disconnected, and we can no longer afford to treat that as inevitable. But fixing it isn’t going to be easy.

Technologists and Ethicists by Cameron Gray, 2025
For more art by Cameron Gray, please visit ParableVisions.com and facebook.com/camerongraytheartist

The Ethical Dichotomy

Here’s the main problem in a nutshell: The technologist is not an ethicist, and the ethicist is not a technologist.

The computer scientist is best positioned to know what new AI technology actually will be and do. She understands the technical details of what’s going on in it, at least as much as anyone can. But unless specifically trained in ethics, she isn’t going to fully understand that side of the interaction. This isn’t of course to say that she’s a bad person, or even to suggest that she lacks good moral instincts. All I mean is that her focus is on the technical aspects. She does not have the moral philosopher’s knowledge of that subject. On the other side, the philosopher may know which ethical frameworks might be applied, and what insights a neo-Kantian, or a utilitarian, or a pragmatist, or an emotivist, or a virtue ethicist, or whatever, would be likely to offer. But however much he may have gleaned from his lay acquaintance with the technology, he does not have the technologist’s intimate experience with it or share her feel for its minutiae. We might make this analogy: the ethicist has the moral algebra, and the technologist has the numbers to be plugged into the equation, but since they’re not the same person, each lacks an essential part necessary to produce an accurate moral calculation.

It gets worse. One thing any ethicist knows is that the field of ethics is inconclusive. Within it are various moral frameworks offering proposals for how we might think through to particular judgments; but these various systems are incommensurable – that is, they don’t agree with each other. The big secret not known to most ordinary folks is that the world of ethics is actually a cacophony of conflicting voices, not a singular voice speaking clarion moral truths. And that’s not merely true at the level of the thought-processes by which each framework operates, but of the conclusions each renders, too. The declarations of a Nietzschean or a follower of Ayn Rand are bound, in many cases, to be quite opposed to those offered by a care ethicist or an emotivist. A pragmatist, positivist or consequentialist is usually not going to end up with the same conclusions as a Kantian deontologist. Therefore one cannot say with any accuracy that moral reflection has arrived at a consensus that something is moral or immoral; only that the field of ethics has offered diverse opinions about what might be right in a given situation, with any conclusion resting on an only semi-informed opinion of which ethical theory is right in the first place.

Underlying this is the problem of legitimisation. To legitimise an ethical view is to be able to say on what grounds it should be accepted, rather than some other view. It’s to dig down to the foundations upon which a moral argument is built so as to show rationally, publicly, and in the face of skeptical doubts, on what basis we conclude we must do X and not Y, which is the basis of arbitration between competing ethical views. Yet the current fashion in moral philosophy and public discourse is to deny we can have such a thing.

It’s also the habit of our politicians and other public relations types to pronounce cavalierly that action X or Y is already safely under the trustworthy scrutiny of some group, panel, agency, or governmental body of ethicists, so the public need not worry too much that the new technology will get out of hand. The relevant experts have been alerted. The matter will be managed. The safeguards are in place. Nothing to see here, folks. In truth, those offering such reassurances are generally operating on nothing more than faith in an expertise they don’t understand.

The Technologist’s Problems

Let’s now turn back to the technological side of the problem. Just as ethicists tend to be certain kinds of people, so do scientists, engineers, and other practically-inclined creators. They tend to share certain inclinations, and one of them is the tendency to focus on getting the task in hand done, rather than, for instance, doing anything so abstract as questioning the moral integrity of what they’re implementing or trying to anticipate how future people might misapply the technology. A creator of new technologies primarily sees the upside of his work. If he didn’t, he’d quit trying to do it, or he’d alter the task; but often he’s simply enchanted with the positive possibilities, and not so engaged with speculative negatives. The technologist drives forward because he believes his innovation will be good: good for him, in that it will propel his career and status; but also, less selfishly, in that it will potentially provide positive outcomes for humanity. In most cases, he’s content to let the ethical deliberation follow the technological achievement: “Let’s debate the right and wrong,” he says, “after we have seen what we can do. Until then, ethical qualms are both premature and merely speculative.” Or, as Oppenheimer so precisely put it, “When you see something that is technically sweet, you go ahead and do it and you argue about what to do about it only after you have had your technical success.” He added, “That is the way it was with the atomic bomb.”

Herein lies the rub: the technology, once achieved, makes any ethical reflection merely after the fact. The deed is done; the technology exists; so what power does ethical reflection have to hold us back from implementing it, on the strength of academic concerns about its possible abuses? We have cell phones, and they do wonderful things; how tedious would it have been to dwell on their unsavoury implications for young people! What are we, Luddites? Or, we can clone sheep. Since, in principle, we can clone a person, too, why shouldn’t we do that? Don’t things evolve, and are not our technologies a product of that evolution? And why should we assume that moral reflection won’t keep pace?

The technologists also assure us that there are already ethics in place. With regard to AI, there is the EU’s Ethics Guidelines for Trustworthy AI (2019), America’s DARPA protocols, the UK’s Ready, Willing and Able report (2018), France’s AI for Humanity summit plan (2018), Australia’s AI Ethics Framework (2019), and so on. But which ethics do we apply? And if one set would do, why are there so many diverse conventions on AI ethics – none of which include Russia, China, or Saudi Arabia, of course, where ethics are not necessarily the primary concern, and where ‘ethical’ may well mean something quite different from our Western expectations anyway? One way or another, the technical work will surely be done before there are adequate ethical preparations.

Lest we imagine that unprincipled technologists are consciously deceiving us, though, let me add a caveat: even they don’t know what the technology they’re inventing will be used for. When the internet was conceived, it was with a view to scientific and medical information-sharing. Who among its inventors imagined it would become a commercial vehicle, gaming platform, and social media network, far less a powerful engine for advertisers, pornographers, and sex-traffickers worldwide? Had they known, would they have invented it?

It’s a good question. But we’ll never know the answer.

Technological Abstraction by Paul Gregory

Caught In A Landslide

According to an old saying, “To a boy with a hammer, everything looks like a nail.” Technology, once produced, tends toward its most obvious uses, without regard for the intentions of the inventors, and also without particular regard for ethics. Things work for what they work for – be they good or evil. Inventors and engineers put them out into the world, but are rarely capable of knowing in advance what ethical entanglements their technology will encounter. By the time we find out, it’s often too late: the technology’s already in use, doing whatever it can, regardless of what we now think it should do.

The problem is that this combination of ineffectiveness on the side of both the technologist and the ethics expert tends to commit us in practice to a kind of surrender to technological fatalism. Whatever has been invented that is technically ‘sweet’, is simply used. Then we give up the moral task – the awkward job of reflecting on what our technologies are going to do to us – and no longer ask whether it’s right or desirable that we should allow them to do it. Technology starts to look inevitable: whatever comes simply has to be good. Ethical concerns arrive at the playing field far too late and lacking their full kit, and the game has been played and won already. Or do you know of any innovation that has been implemented and then rejected because its outcomes have been revealed to be inhumane or otherwise unethical? Only CFCs and certain pesticides and medicines. And one may think of eugenics: but it took a Holocaust to cast that into even temporary disrepute, and it continually threatens to resurface, if only because of the transhumanist evolutionary enthusiasms so common nowadays.

Speaking of evolutionary enthusiasms, I should mention an additional disconnect between technology and ethics: speed. Human moral evolution, if such a thing is even assumed to exist (and the historical evidence isn’t good), is, by all accounts, a process of glacial slowness. By contrast, technological development today happens at warp speed. It’s therefore inevitable that the pace of technological innovation will outstrip any expectation of moral philosophy catching up. We should therefore know for certain that no adequate ethical vetting of our technology is taking place. It cannot be.

Dark Futures

What all of this means is that our technology will do to us everything it can possibly do, and human beings are no longer in control of what they invent. Enchanted with the cleverness of our own creations, we present ourselves as sacrifices on the altar of their greatness. We can no longer resist, decide or choose, since ‘We can!’ shouts down ‘Should we?’ every time. In our rush to seize the benefits we are powerless to avoid the detriments… even when the latter grossly outweigh the former.

For what, then, am I arguing? For a cessation of invention? For a halt to innovation? For another government ethics body to pose as capable of monitoring and constraining the new developments on our behalf? Or perhaps for giving up on moral philosophy – the much-touted ‘end of ethics’?

None of the above. But I am arguing for some basic realizations. One is that any confidence in progress appearing in both the technological and moral realms simultaneously is dangerously naïve. First, ‘progress’ is equivocal with respect to technology, and highly problematic with reference to ethics. Moreover, these two aspects do not develop in lock-step: technology is developing exponentially faster than moral philosophy. Another point is that we cannot be at all confident that the various government mechanisms purported to manage the ethics of technology are going to do that job adequately. There are no benign overlords watching and wisely administering the innovations. And if such measures do achieve anything, it’s bound to be little more than a brief holding period involving some confused debate; and then the broad implementation of the technology, whether or not the ethical issues have been resolved.

Our response to all this should take place on two fronts. On the personal front, it means that you and I have to take back responsibility for what technological inroads are permitted in our lives. This doesn’t imply any kind of unthinking hatred of technology, but rather a deliberate intelligent filtering of it… And because a technology’s effects are often not visible until after its implementation, this is likely to sometimes include a willingness to reject some technologies even if others are embracing them uncritically. What happens to you and me is ultimately up to you and me.

At the level of public policy, it’s clear that something better than our current general moral relativism (‘you do you’) will be required. The idea that all ethics can be treated as mere options, and that every position, from virtue ethics to pragmatism to nihilism, is equally valid, is simply not capable of informing the political responses we now so desperately need. Rather, any policy measures will have to be structured around some singular, commonly-plausible ethic of technology. In a democratic context, this will, of necessity, have to be legitimised very broadly. Now there’s a challenge worth taking up!

How is this to be done? Not by diktat from above: there is no rescue in turning to arbitrary fiat, or even totalitarianism. Instead, we need a public ethical discussion that’s both comparative and combative. It needs to be comparative in that it pulls ethical views publicly into conflict with one another again; and it needs to be combative because we need the winners to emerge. We’re going to have to make definite choices, and set public policies. If we’re going to make ethics count here, we need to stop being squeamish about the fact that there will be winners and losers in the debates, ideally with sufficient reasons publicly adduced for each. This implies that ethics will have to be treated with much more seriousness and with a much higher profile than ever before.

Mere resignation or capitulation to the inevitable is not an option. Nor will the current relativistic tendencies of morality in Western culture save us from the technological disasters of the future that seem to be bearing down on us with ever-increasing rapidity. So ethics – real, high-stakes, serious public ethical debate – is back. No longer can we view moral philosophy as a sterile cul-de-sac: it’s now a leading area of concern, with new work to do of an overwhelmingly significant kind. This is surely good news for the field of ethics. But is all this good news for mankind?

That’s quite a different question. But whether we like it or not, we can see that the future will be played out either with or without reference to moral philosophy. Don’t we already know we cannot afford the latter? In an age of AI, pandemics, nuclear threat, and an uncertain environmental future, we no longer have the luxury of remaining indifferent to the ethics of technology.

© Dr Stephen L. Anderson 2025

Stephen Anderson is a retired philosophy teacher in London, Ontario.
