Roberta Fischli & Thomas Beschorner argue that our digital future is not preprogrammed: it’s about time we start thinking about what it should look like.
Have you watched the dystopian TV series Black Mirror? The episode Nosedive paints a picture of a future society where people rate each other’s behavior according to a credit system. ‘Good’ behavior is rewarded with points; ‘bad’ behavior reduces your score. This form of social control is flanked by all kinds of state surveillance: facial recognition technologies are used to track people’s every move, for example. When the episode first aired in 2016 it was already an allusion to and criticism of the emerging Social Credit System in China. But we don’t have to look that far. In recent years the Western world has also become familiar with similar trends.
Lacie Pound, the episode’s protagonist, plummets through this society. Her behavior drives her score down, and she finds herself in a downward spiral. Ultimately, she ends up in prison. Ironically, this is the only place of freedom in that society. The ‘prison’ metaphor also serves another important function in the ongoing discussion about digital transformation. It provides an allegory for the new surveillance capitalism that takes away our freedom and ‘imprisons’ us, albeit in a peculiar way. The utilitarian philosopher Jeremy Bentham once designed an ‘ideal prison’ called the Panopticon. In it, a few guards in a central observation tower would be able to monitor many prisoners. In our digital times, though, the Norwegian criminologist Thomas Mathiesen speaks of a synopticon, in which everyone potentially observes – and thereby controls – everyone else. Whether the observation actually takes place or not is irrelevant for the effect. As behavioral economists have shown, the mere possibility of ‘social moderation’ is itself likely to have a chilling effect on expression and action.
As important as the current scientific and political focus on the implications of surveillance is, there are other equally important questions we should urgently ask. To start, the terms ‘surveillance’ and ‘control’ are associated with a specific understanding of freedom – freedom from something – such as from interference, control, or constraints. The philosopher Isaiah Berlin called this negative freedom. This type of freedom is closely linked to the liberal values of Western societies. The idea of liberating oneself from unwanted constraints is also reflected in many of our conversations about the perils of digital technologies. Accordingly, the most significant parameters for digital freedom tend to be those that correspond with this liberal, negative, understanding of freedom: ideas such as autonomy, independence, and free choice. Yet although this provides valuable intellectual insights – the danger of increasing surveillance for freedom of expression and democracy, for example – its paradigmatic standing in our approach to freedom risks our neglecting other equally important considerations in the digital world.
Just like any other contested core concept, ‘freedom’ has been defined and interpreted from a variety of perspectives. Two more are worth noting here: the ‘positive’ and the ‘social’ approaches to freedom.
Unlike the negative freedom from constraints, positive freedom is Isaiah Berlin’s name for having the positive ability to do something. Positive freedom stresses, for example, the importance of political participation and the pursuit of one’s own version of ‘the good’. For this reason, thinking positively about freedom invites us to think about the values and goals we’d like to see embodied, pursued, and maybe even realized by the digital transformation. So the question concerning digital freedom is not simply How can we prevent a surveillance society?, but also, Which positive social and political ideals do we want to see promoted – and who gets to determine that?
The other understanding of freedom that provides fruitful grounds for discussion here – social freedom – is offered by the contemporary German social philosopher Axel Honneth. Honneth argues that none of us live in ‘solitary confinement’ as an asocial ‘I’; rather, we continuously interact with others as a social ‘me’: as family members, as consumers, as citizens. Who we are is heavily contingent on those around us. Our actions likewise shape others and make them who they are. There is a ‘we’ in ‘I’ and an ‘I’ in ‘we’. Put differently: our fellow human beings constitute who we are as persons. So freedom is realized not only through them, or even in spite of them, but in them.
Positive and social freedom invite us to dare a prison break from an overly narrow concept of freedom as the absence of coercion. And when this freedom from negative freedom is applied to the digital transformation, three areas emerge as particularly important.
It is well known that software developers are drawn mainly from a specific social group: well-educated white men. The consequences include algorithms that work in favor of members of that social milieu while ‘punishing’ others, for example by limiting their access. It is in the coding – the writing of the software – that ‘relevant’ characteristics are determined, and so that who ‘we’ are is defined. Consequently, the strong homogeneity in the development departments of software companies can be a source of systematic disadvantage for, or arguably even a degree of oppression of, other social groups.
One promise of the internet was that it would facilitate cosmopolitan connectivity and let spatial distance fade into the background. Indeed, today we can maintain business or social relationships that were unthinkable thirty years ago. This has expanded the possibilities of ‘us’. At the same time, however, developments over the past few years also point to something else that’s more detrimental than beneficial to a ‘we’: we seal ourselves off with our peers in social media bubbles and echo chambers. These new territories often constitute a new ‘us versus them’ division. Discourse fronts harden. In terms of Honneth’s thinking, these developments also limit or deprive us of the social freedom which constitutes us as persons.
Further, just as democracy requires informed citizens, our digital future cannot thrive without widespread digital competence. In this case, this does not mean a knowledge of programming languages, but knowledge of the effects of one’s actions online. Among other things, this might entail a responsibility on the part of social media users not to comment just on the basis of headlines and teasers, but to form a substantial and informed opinion concerning the topics about which they wish to post. This would be an important step towards a positive freedom of the individual in a liberal digitized society.
Fighting For Digital Freedom
The other Statue of Liberty, in Riga, Latvia
Photo © Smurrayinchester 2018 Creative Commons
If we ask how we can make progress in these areas, three options are likely to guide action: 1) Self-control by companies; 2) Tough regulation by political institutions; and 3) Strengthening civil society.
One may be increasingly skeptical with regard to the first option. Ever since Facebook founder Mark Zuckerberg was grilled by a U.S. Congressional committee on suspicion of influencing the 2016 presidential election, ethical self-commitment declarations have mushroomed across cyberspace. Whether this is more than mere talk can certainly be questioned. For instance, Google fired the prominent researcher Timnit Gebru, co-lead of its Ethical AI team, in late 2020 for preparing to publish a critical research article. And Facebook restricted access for New York University’s Ad Observatory in 2021 because it was conducting research on issues of misinformation in political advertising.
As early as 1944, the economic historian Karl Polanyi was skeptical about whether capitalism could change from within. Nevertheless, in his book from that year, The Great Transformation, he describes the historical interaction between democratic ideas and capitalistic interests in terms of a ‘double movement’: the continuous growth of the market is met by a ‘countermovement’ of spontaneous efforts by politics, citizens, and civil society to check the expansion of the economy in certain directions and to limit its disembedding from society – with varying degrees of success. The parallels with checking the expansion of surveillance capitalism are obvious. In the words of Harvard professor Shoshana Zuboff, “if industrial civilization flourished at the expense of nature and now threatens to cost us the Earth, an information civilization shaped by surveillance capitalism will thrive at the expense of human nature and threatens to cost us our humanity” (The Age of Surveillance Capitalism, 2019, p.326).
Perhaps better regulation and a strengthening of civil society is more promising than relying on technology companies alone to take responsibility? The European Union, for instance, has already launched several policy regulation initiatives in recent years – notably, the ‘Proposal for a Regulation laying down harmonized rules on Artificial Intelligence’ in April 2021.
However, this and other legislative initiatives need to be accompanied by more democratic input and participatory elements. Civil society, especially in the form of non-governmental organisations, is crucial for reshaping the digital arena. NGOs can aid the process of constructive criticism by contributing both their expertise and their varying social perspectives. Strengthening the voices of critics and opponents, and also providing sufficient resources to implement some of their criticisms, would be essential.
The role of civil society organisations is likely to be important for another reason too. The increasing use of AI seems unstoppable; in this sense the future seems to be pre-programmed for many areas of life. However, what’s being pushed into the back seat is the more fundamental question of which areas of society and which social decisions we want to exclude from the use of AI as a matter of principle. When using AI might run counter to our ideas of freedom, privacy, or justice, and possibly even endanger our democratic-liberal order, we need to say no thanks. Do we want AI-supported weapons systems, or surveillance systems using facial recognition technology? Should algorithms be allowed to make triage decisions in intensive care units? Addressing these and many other similar questions would be a prerequisite for gaining a truly human-centered perspective on digital transformation, in which humans are not merely ‘in the loop’ but also ‘outside the code’.
© Roberta Fischli & Thomas Beschorner 2022
Roberta Fischli is a PhD student at the University of St. Gallen and is currently a visiting researcher at the Center for the Study of Law and Society (CSLS) at UC Berkeley. Thomas Beschorner is Professor of Business Ethics and Director of the Institute for Business Ethics at the University of St. Gallen.