
Digital Philosophy

Affirmative Action for Androids

Jimmy Alfonso Licon asks, when should we prioritise android rights?

We should begin examining the question of whether there should be affirmative action for androids by answering a couple of prior questions. Would androids count morally anyway? And what conditions, if any, would justify affirmative action for androids?

It’s easy to dismiss the possibility that androids, being machines, would deserve any greater moral consideration than a cellphone; but in any future world where androids have become self-aware, capable of suffering, and have projects and values which matter to them, to not give them moral rights would mean that the androids would be second-class citizens at best, and slaves at worst. Such an arrangement wouldn’t survive: many humans would feel bad for the androids, and the androids would resist.

So what are we to think morally about androids?

Do Androids Count Morally?

For something to qualify as the subject of fairness considerations, it must first have moral standing. As the philosopher Christopher Morris explains:

“[The] metaphor of the moral community is an interesting one. It makes possession of moral standing analogous to the political status of citizenship. Like membership in the political community, membership in ‘the moral community’ gives one a particular status that non-members lack, in particular a set of rights. [Something] has moral standing if it is owed duties. This understanding of moral standing connects it with the notion of legal standing; both are conceptions of a status that entitles the holder to something.”
(The Oxford Handbook of Animal Ethics, edited by Tom Beauchamp and R.G. Frey, p.262, 2011)

To put the point simply: to have moral standing is to deserve moral consideration for one’s own sake.

So for androids to be appropriate recipients of affirmative action, they must have moral standing. Androids of the conscious kind we have in mind would likely count as persons. And if androids were persons – self-aware, rational, capable of suffering, with a point of view – they would deserve moral treatment on par with humans. So if androids are persons, they would deserve fair treatment.

Some people suspect androids aren’t the kind of thing that could, even in principle, have moral standing. They think that androids, unlike humans, aren’t the kind of thing that could warrant moral concern.

Mechanical Man © Bill McConkey/Wellcome Collection

One intuition at work in the background here is that androids are synthetic, and thus couldn’t be persons. However, it isn’t clear why their physical constitution should bear on the question of their moral status. Even if they’re made of silicon and metal, while humans are made of meat, these differences appear incidental to the question of their moral status. Put differently: if androids and humans are relevantly similar ‘on the inside’ – in terms of having a sophisticated self-awareness – then the question of their material constitution is immaterial. And there clearly are logically possible androids that are similar enough to humans psychologically to warrant moral treatment. (We’ll put aside whether androids like this are technologically possible; instead, we’re evaluating the ethical implications if they are.)

A well-known thought experiment from Daniel Dennett’s 1991 book Consciousness Explained illustrates why we should take the idea seriously. Suppose you have a rare cellular disease that’s slowly killing you. Upon consulting a doctor, you learn that there is a radical new treatment available: your body, including your brain, is to be replaced bit-by-bit with synthetic parts. The change happens gradually, over the course of months and years, so that from your point of view it is almost imperceptible. You remain you, with the same memories, values, personality, and so forth. At the end of the transformation, even though you now have a constitution similar to that of an android, it seems from your perspective that you’ve simply survived.

Presumably this story isn’t impossible. It should also be clear that you preserved your moral standing throughout the procedure. But if so, it cannot be that your moral standing is a function of the kind of stuff you’re made of. The same holds for androids: if they have a similar psychology to humans, then they warrant similar moral considerations.

We are thus left with the distinct possibility that androids could warrant moral consideration if they have the right psychology. Androids like this would deserve to be treated as morally on par with their human counterparts. This opens the possibility that androids would require affirmative action.

Would Androids Need Affirmative Action?

Even granting that such androids have similar moral standing to humans, there is still the issue of what would justify special treatment for androids. If anything, their similar moral standing would appear to tell against special treatment through policies such as affirmative action.

There are a couple of reasons to think a compelling case for android affirmative action can be made: first, that androids are made worse off by the actions of humans; second, that humans are not psychologically disposed to treat androids fairly, even though androids and humans have similar moral standing. But we should begin by distinguishing between different varieties of affirmative action.

When people hear the phrase ‘affirmative action’, they often construe it in terms of prioritising identity over qualification. For instance, in the case of racial affirmative action, it is tempting to frame it as a policy favoring a black applicant over a white applicant even if the former is less qualified than the latter. This is why when the subject of affirmative action is broached, people often express opposition to the policy through the idea that a company should hire ‘the best man for the job’. But this is a woefully poor understanding of the policy. It’s true that on a strong version of racial affirmative action, minority applicants are favored over white applicants in virtue of their identity, even if they’re less qualified; but hardly anyone defends strong affirmative action.

Nevertheless, even if we think ‘the best man for the job’ is a good rule of thumb (antiquated phrase aside), it doesn’t tell us how to handle cases where, say, black and white applicants are equally qualified for a job. On weak affirmative action, the minority status of an applicant favors them only to the extent that it breaks this tie. On this weak version of affirmative action, we end up with qualified applicants for the job, yet past and on-going discrimination encourage us to favor the minority candidate when there’s a tie.

Some may balk at this for still running afoul of fairness, holding that appeals to minority status even as a tie-breaker remain unfair: it would be better in such cases, they might claim, to flip a coin. But this would only be right to the extent that the applicants are on equal footing in every other relevant respect. Those who favor affirmative action reject this assumption: they hold that centuries of discrimination toward minorities put them at a current disadvantage, either because of historical trends whose effects are still felt, or because of disadvantages embedded in the current system. Weak affirmative action aims to correct this.

Critics might argue that such unfairness is no longer a large enough factor to warrant favoring minorities in the workplace. This response is revealing, since the implication is that if significant discrimination were still a fact, there would be a good case for weak affirmative action. As the philosopher Alan Goldman writes:

“The rule for hiring the most competent was justified as part of a right to equal opportunity to succeed through socially productive effort, and on grounds of increased welfare for all members of society. Since it is justified in relation to a right to equal opportunity, and since the application of the rule may simply compound injustices when opportunities are unequal elsewhere in the system, the creation of more equal opportunities takes precedence when in conflict with the rule for awarding positions. Thus short-run violations of the rule are justified to create a more just distribution of benefits by applying the rule itself in future years”
(Justice and Reverse Discrimination, pp.164-65, 1979, my emphasis).

So the standard objection to affirmative action – that it unfairly ignores the merits of the candidates – doesn’t have the same bite against weak affirmative action. On a weak version of affirmative action, candidates must be equally qualified before we consider the different paths each candidate took before applying for the job. The objection would have greater bite if each candidate had to overcome similar obstacles to be considered for the job; but that often isn’t the case. The devil is in the details; but this at least shows that weak affirmative action need not be unfair in favoring members of disadvantaged groups who likely face background disadvantages.

These points have already been made in the debate over affirmative action for historically discriminated-against minorities; but they could equally apply to androids. If the system were rigged against androids who have moral standing, then we may need a policy change to address the resultant inequalities.

But why think androids in the future might one day require affirmative action? There are a couple of reasons.

First, we imagined a world where androids were created to serve humans, but were eventually freed from mere servitude because of the recognition that they had acquired the same moral standing as their human counterparts. It is plausible androids would be discriminated against in such a world because humans fear they will be cognitively superior. Better to keep them down than to risk switching places with them!

This is a common theme in history: we humans have an impulse to keep our boot on the throats of those who we fear are a danger to us. There are numerous more mundane examples of this too, from supervisors fearing they will be replaced by more capable employees, to parents who envy their children for doing the things they wished they had done but couldn’t. It wouldn’t be surprising, then, to find future humans motivated to band together to keep androids from forcing some humans into the secondary roles they once occupied. Moreover, it isn’t unthinkable that the humans would be better at political organizing than the minority androids, so that they would remain in power over them even if they were at a cognitive disadvantage.

Second, humans likely won’t generally trust androids. (This doesn’t bode well for future societal prosperity: societies with high levels of interpersonal trust tend to be far more prosperous than societies with low levels: see Why Culture Matters Most, David C. Rose, 2018.) Indeed, there is something unnerving about being stared at by an android. Psychological research on human-android interactions reveals that humans are so sensitive to subtle changes in gaze that even a gaze which is slightly off can trigger a sense of unease. This is perhaps because gaze is often an unwitting guide to the gazer’s goals and aims, and this is lacking in the android gaze. So this sense of unease or alienness is likely to be a large obstacle to robust relationships between humans and androids.

This point dovetails nicely with the ‘uncanny valley’ effect: robots that don’t resemble humans at all aren’t unlikeable; but as they become more humanlike, while still being distinctly non-human, they are viewed as increasingly unlikeable, before they eventually resemble humans enough that their likeableness rebounds. As a couple of AI researchers discovered:

“[As] faces become more human than mechanical, they began to be perceived as frankly unlikeable. Finally, as faces became nearly human, likeability sharply rebounded to a final positive end point… although the most human-like robots may be more likeable … they may occupy a precarious position at which small faults in their humanness might send the social interaction tumbling… the Uncanny Valley is a real influence on humans’ perceptions of robots as social partners, robustly influencing not only humans’ conscious assessments of their own reactions, but also able to penetrate more deeply to modify their actual trust-related social behavior with robot counterparts”
(‘Navigating a Social World with Robot Partners: A Quantitative Cartography of the Uncanny Valley’, Cognition 146, Maya B. Mathur and David B. Reichling, 2016, my emphasis).

One obvious solution to human-android interactions would be to make androids more humanlike; but there remains the issue of what to do when we’re in the lower parts of the uncanny valley. If an android has enough cognitive sophistication to have a comparable moral standing to humans, it would be a moral tragedy to discriminate against it on the basis of how it looks.

Even after passing through the uncanny valley, humans face further difficulties in adjusting to social interactions with androids. Just knowing that an individual is an android will likely be enough to put many people on edge. Even if we can adjust to regular interactions with androids, this adjustment will likely be slow. We then face the challenge of treating androids fairly in the meantime.

Android by Cecilia Mou 2025

Countering Instinctive Biases

We aren’t blameworthy for our psychology any more than we’re blameworthy for our height, and our foreseen distrust of androids would be so deeply rooted in evolutionary processes that it would likely be difficult to overcome. And if we lack control over something, it is difficult to see how we could be culpable for it. As Kant writes, “[If] the moral law commands that we ought to be better human beings now, it inescapably follows that we must be capable of being better human beings” (Religion Within the Boundaries of Mere Reason, 1793, trans Allen Wood and George di Giovanni). Kant is stating a principle that philosophers have dubbed ‘ought implies can’: to say that we morally ought to do something implies that we have the ability to do it. It makes no sense to claim that humans have a moral obligation to solve global hunger by snapping our fingers, for instance. By the same token, we cannot claim humans morally ought not to discriminate against androids, since it appears we will be incapable of avoiding this directly, given our evolutionary history and subsequent psychology. However, one way – though not the only way – we can fulfill our moral obligations to androids with similar moral standing to humans is by an indirect route: weak affirmative action for androids.

Even putting aside issues of justice in the debate over weak affirmative action for androids, employers would have a reason to prefer android candidates over equally qualified human ones: they probably overcame a great number of obstacles to even be in contention for the job in the first place. Consider: if candidates Adrian and Brian look equally qualified on paper for a position, but Adrian had to overcome greater obstacles to gain those qualifications, then Adrian would likely be better at the job than Brian, since to overcome the bias he faced, Adrian would have to have been that much better. As Dan Moller explains, when faced “with a pick of accountants at a firm, sound epistemology overwhelmingly suggests barreling past attractive, polite workers and urgently seeking out the ugliest, shortest, most boorish one available” (Thought: A Journal of Philosophy 2 (2), 2013).

There is empirical evidence showing that factors like attractiveness and height impact people’s judgments about merit and ability to an amazing degree, and furthermore, that these qualities are often (though not always) irrelevant to one’s job duties and should be discounted. Once we discount them, it is clear that job candidates lacking these qualities but having equivalent achievements are likely better qualified for the job than their more outwardly appealing competitors. The same applies to androids. So issues of fairness are not the only considerations suggesting weak affirmative action for androids may be justified: when job candidates are equally qualified but some come from more disadvantaged backgrounds, there are also practical reasons to think that the disadvantaged candidates are better qualified, further justifying weak affirmative action for androids.

There is a further reason that may support weak android affirmative action: we have stronger moral obligations to androids than we would otherwise, due to the fact that we caused them to exist. Parents have special obligations to their children for the simple reason that they brought them into existence. The same logic applies to androids: they were created by humans, and yet face discrimination from them.

© Dr Jimmy Alfonso Licon 2025

Jimmy Alfonso Licon is an Assistant Teaching Professor in Philosophy at Arizona State University.
