Re-imagining the Open Society in the Digital Age: A Bridges Project Position Paper
The guiding question for the 2019 Bridges Retreat is a broad one: if emerging digital societies supported social well-being and liberal democracy, what would they look like and how would they be governed? The best way to start answering this question is to put human beings at the heart of the discussion.
This project therefore focuses on how digital technology is shaping human behaviour and emotions in ways that influence people’s engagement with politics. Insights from this project will inform better approaches to regulating the digital society in future, through a deeper understanding of how aspects of human nature are shaped by online interactions and emerging digital technologies.
I. Humans in the online world
The profound effects of digital life on the human brain, both individually and in interaction with others, are increasingly widely known. For example, there is now a degree of moral panic among policy-makers about what digital technology is doing to children, and whether an entire generation has been negatively affected by it. To create an online world that fosters well-being would require: avoiding exploitation of human weaknesses; enabling conscious choice by users about whether or not to engage; developing alternatives to the advertising-driven attention economy to reduce the potential negative mental health impacts of using social media; and enabling users to make wise choices rather than encouraging automatic reactions.
The dangers of biases and blind-spots
We already know about some of the deeper effects that digital technologies and social media have on human beings, for example at a neuroscientific or psychological level. For instance, the limits of our own knowledge of how reason works are central to an understanding of the impact of digital technology on ourselves. The concept of the purely rational human thinker was rightly challenged by the seminal research of Daniel Kahneman and Amos Tversky, who investigated and labelled the many ways in which human reasoning seems to get things wrong, bringing the concept of cognitive bias into mainstream discussion. Their research found that – as long surmised by many in the humanities – a great deal of ‘rational thought’ relies on broad generalisations, ignores much of the available information, and rests on dubious leaps of inference.
Addressing the effects of these cognitive biases, as well as ethical and emotional blind-spots and habits, is particularly relevant to the digital transformation. This is because people tend to resort to using biases in reasoning or rely on emotional habits when they find themselves in a state of stress or anxiety, which digital communications can easily bring about. The information overload people must deal with when navigating online spaces is a prime example of an aspect of the digital environment which causes such a state of stress, and where such biases then become prevalent. More broadly, the large-scale shifts in power and functioning of society are driving a general unease and sense of vulnerability among publics. Against this backdrop of systemic change, upheaval and stress, biases and cognitive short-cuts are becoming more pronounced in public discourse and political reasoning.
Overall, this growing understanding of how the digital transformation encourages knee-jerk decision-making strongly challenges the assumptions of neo-liberal economics that the market will minimise any negative consequences, and that human beings are generally good at making choices. The recent experience of tech giants failing to prevent the misuse of data and the manipulation of elections shows that markets are simply not enough on their own.
How the digital world shapes politics
How do these effects then shape people’s political preferences and engagement in democratic processes? First, the digital transformation has created new expectations among voters, who have a growing sense that political parties and law-making are out of touch, slow and cumbersome. This is understandable given the changes in the way we conduct other parts of our lives. Online life is instant, seamless and transparent – while politics is often slow, laborious and opaque.
The online world has also created new affiliations and locations for people’s political engagement. The traditional model of mass membership of political parties is – mostly – losing support, but people are interested in new forms of affiliation, especially through social media and alternative networks. Digital technology allows people to find myriad new ways to express their political views publicly, outside of formal political spaces. These new “digital commons” reflect the hopes, views and beliefs of citizens – but, for now, it is not easy to connect these new debates to formal political engagement.
Online, humans are also confronted with an overwhelming amount of new sources and types of information. The internet has made vast amounts of data and a huge range of information sources across an enormous spectrum of issues available to anyone with an internet connection. This information overload, and the competition traditional media outlets face from web sources, is having a profound impact on politics and democratic engagement.
All of these trends present opportunities and challenges for liberal democracy. One of the most important effects is that the combination of cognitive biases and information overload is fuelling populist politics. Populist leaders claim that – contrary to elites – ordinary citizens (‘the real people’) instinctively know the ‘right thing to do’ and what is good for them; that they have more ‘natural wisdom’ and more ‘common sense’ than disconnected elites; and that they could do a better job were they given the chance to exercise power. This explains their emphasis on various direct forms of democracy (like referendums), as well as their preference for blunt, majoritarian forms of democracy that concede little to minority opinions, preferences or values. Through the exploitation of speed, stress and bias, the digital sphere encourages the instinctive reactions that favour the politics of anti-complexity.
The best antidote to populist politics involves giving voters the means and confidence to reject this politics of anti-complexity — and this can only be done by giving them the space to develop different types of reactions and the tools to engage in a different kind of reasoning that necessarily draws on emotion, but encourages people to go beyond instinct. But the online world is extremely poorly placed to afford people this time and space for reflection and critical thinking – in this retreat we ask what it would take to change that, and to turn the digital transformation into one that is also a democratic transformation.
II. Political consequences for the public sphere
The need for limits on human behaviour and interaction is well accepted in the offline world, but there is no consensus yet about the need to set similar limits online.
After about three centuries of evolution, liberal democratic systems reached broad agreement over the limits that must be set on what the state can do to citizens and what citizens can do to each other. These limits are codified in legal and constitutional constraints. They are also internalised by citizens through social norms, education and political narratives that run through public institutions and are promoted by them. These norms include rights, positive and negative freedoms, obligations, responsibilities and duties.
The norms that pervade the offline world only partially apply online, even though the interaction is happening between the same human beings brought up in the same democratic cultures. We are now faced with a situation where abuses and unintended consequences are leading to a growing awareness of the need for a framework of limits that can also apply online.
Public consent to such a framework is vital, but such consent has not yet emerged. Although citizens in liberal democracies are used to trade-offs in the offline world (such as between privacy and security), the seamlessness and pace of the online experience leads to a situation in which people are less willing to make difficult choices. In the online world, it feels as though one can have everything – and everything all at once. Therefore, people fail to realise that the digital space is subject to costs as well as benefits, and that the trade-offs between these costs and benefits – e.g. a loss of privacy vs. greater convenience – need to be regulated.
The lack of an accepted framework of rules to regulate costs and benefits is now disabling citizens’ capacity to make meaningful and informed political choices. The headline-grabbing reasons are about manipulation, micro-targeting and fake news. But there are also deeper changes in the practice of politics driven by voters’ experience of online life: democracy seems disappointing and dull in comparison with the world of entertainment or shopping, while institutions seem remote and slow.
The role of the state in regulating online behaviour
The state currently remains the only entity capable of creating this new framework to serve the public interest. But it needs to update the institutions that codify the norms developed over the past three centuries to cover the online world as well. It is very difficult for states to lead on this issue: public institutions move slowly, whereas technology changes fast.
Nonetheless, states have to find ways to regulate online behaviour – both by individuals and commercial entities – if liberal democracy is to survive and societies are to remain open. This new frontier of regulation will inevitably reopen dilemmas that are taken for granted in the offline world, simply because institutions and laws were created to settle them. For example, laws were introduced that govern how to ensure free will and personal choice, and how to protect the vulnerable or to obtain justice.
Existing institutions were elaborated with a specific – real or imagined – human in mind: a modern, singular, reasonable being whose contours gave rise to concepts such as inalienable rights and duties. The rules were devised for individuals – e.g. the notion of natural and legal persons – but the beings that inhabit the online sphere are fragmented. The digital era is increasingly creating, or revealing, a version of the human being that is multiple, displaced and multi-faceted. Increasingly, we ask where the human stops and the machine begins. Or what is human and what is avatar? What is personality and what is network? In some cases, what is real and what is bot?
This creates a new set of challenges and a new subject for political philosophy. The first reaction to any technological development is to try to apply old structures to the new challenge. But it is necessary to think about what a really different form of governance might look like – with these (newly imagined) humans at the centre of it.
New expertise for the digital era
It already looks as if effective governance of this fragmented world will require a new pact between civil society and the state, based on new forms of expertise. If expertise is to survive as a key shaper of policy, then expertise will have to change from being closed to being shared. At present, we are in a transitional period that could be described as the worst of all worlds: expertise is at once increasingly the preserve of the few, and the illusory preserve of the many. Expertise will fail or survive depending on whether it is capable of creating its own systems for the sharing of knowledge and information that will renew its claims to legitimacy. New forms of expertise will need to be valued on the basis of their capacity to bring together disparate but complementary forms of knowledge and information – rather than exist as isolated specialisms.
The search for public consent could result in a new partnership for regulation. The state has to work with different types of communities and new forms of civil society to do this effectively, for example Wikipedia, hashtag communities and ephemeral movements that have no geographical basis. If this new form of governance worked well, it could create a new consensus that would transform agency – which is vital to defeat populism – by engaging people in a new form of politics.
III. What requires attention in regulation
Legal frameworks and social norms exist in order to prevent exploitation of vulnerabilities. But online it is easy and cheap to target trigger-points in human behaviour and emotion – vulnerabilities exploited through marketing of products, but also by political positions and parties.
Moreover, the online world removes the inhibitions that apply in society and can enable the worst in human behaviour. Therefore, we must ask how regulation can foster social norms that work in tandem with procedural rules and formal institutions to cause people to regulate their own behaviour. Norms can be set by communities rather than institutions, for example in the way that Wikipedia regulates itself to ensure accuracy.
Any proposed state intervention raises dilemmas: it is not easy to regulate in the public interest when there are trade-offs. Even when there is widespread recognition of a problem that needs a solution, that solution may require revisiting key political trade-offs that were made long ago in constitutions and other founding agreements for democracies. What follows are some of the most obvious dilemmas about regulating the digital sphere.
a. Spontaneous Mobilisation vs Constructive Participation
An individual’s or a group’s capacity to influence the political sphere has been greatly amplified by the new tools delivered by the digital transformation. Political movements can emerge abruptly and harness the power of networked crowds to press suddenly for changes in the law. An increased desire for political participation is generally positive for society. However, if politics can be easily swayed by movements as they arise, the normal functioning of representative democracy could be overruled by the demands of continuous reactionary movements.
Both the growing power of online citizen-led political movements and the means for external manipulation to accelerate the drive to activism have created a new and unpredictable political force. What regulatory mechanisms can and should institutions put in place to protect the functioning of established political deliberation? How can they do this without snuffing out the green shoots of new democratic demands by restricting the freedom to engage in political mobilisation?
b. Access to Information vs Quality of Information
On the web and social media, appearance is reality. In simple terms, one’s sense of reality can only ever be constructed from the information one encounters. Many people’s principal source of information is now social media and online content, and how this information is selected affects their sense of reality. Even without deliberate microtargeting to manipulate people’s views, social media often delivers poor quality information to users because of the way that information spreads there, with scare stories often crowding out real ones. No digital innovation has yet replaced rigorous research and good writing.
With the effort required to achieve high journalistic standards as the new limiting factor, there is an inherent asymmetry between those who wish simply to distribute content and those who wish to distribute quality journalism. The result is quantity over quality in online journalism. As click rates and view times become the metrics for success, junk news, low-quality content and outright conspiracy compete for attention, and hence compete for influence in shaping users’ beliefs.
Trust in information is vital for learning and debate; so is the quality of that information, which must reliably inform and educate the population. Given that social media and the internet are valued as sources of information, there is a duty to maintain the quality and veracity of online information to ensure a shared sense of reality among a well-informed population. In what ways should institutions limit the scope for manipulation and promote healthy, constructive debate while not curbing individuals’ access to information? What guidelines should be set to define such standards for information?
c. Freedom of Expression vs Quality of Online Interaction
Online spaces create new areas for public debate and engagement. In keeping these spaces completely open, individuals are free to produce and publish content that is constructive and informative or is intended to manipulate or cause discord. Without frameworks to guard against manipulative practices and deceptive content, the principle of open debate and freedom of expression can be abused to subvert attempts for constructive online discussion and interaction.
There are several means through which individuals can exploit trust in open online spaces and degrade the quality of interaction: online grooming, botnets, fake news, fake video and audio, etc. When individuals and online actors are capable of disrupting and manipulating online spaces, the unrestricted right to engage in open public debate is at odds with the need to maintain a certain level of constructive, quality debate. To preserve or create quality public debate online, institutions need to define a framework that may curtail open access to discussion as a means of defence against online actors that seek to exploit this principle of openness.
d. Privacy and Data Ownership vs Convenience
Digital innovations have granted the public seamless access to information and goods. This seamlessness is, however, predicated on accessing preferences and past behaviour – the more data is accessible, the more responsive and convenient the digital tool. Data is the fuel that powers the machine learning algorithms which are central to the production of revenue for tech companies and platforms. Individuals have given up privacy and control of personal information to get seamless online interaction, which tech giants deliver in return for revenue derived from use of the data they collect on users.
Increasingly sophisticated machine learning models can infer highly intimate details about individuals from behavioural data. This inferential power in itself constitutes a massive risk to personal privacy. Personal data can be exploited further by again applying machine learning systems to direct specifically tailored content that is most likely to achieve some goal, whether motivating action or shaping opinion. This industry of data collection and trading entirely removes control or leverage from the individual. Personal data can then be used to further encroach on privacy and influence individuals. Alternative models of data ownership and control are possible. What levels of privacy should states guarantee to citizens? What trade-offs will people be prepared to make in quality of service from tech giants to maintain their privacy?
e. Transparency vs Efficiency
The automated processing of data is necessitated by the sheer volume of data that is generated and dealt with in the modern-day functioning of society – e.g. in business, banking, institutional bureaucracy, and the monitoring of user behaviour. Using automated systems can make decision-making more consistent, accurate and cost-effective. However, the systems that perform the processing are not without fault, and such faults are painfully difficult to detect and correct. As automated decisions are integrated into the functioning of bureaucracies, these stages of decision-making become entirely opaque owing to the ‘black box’ nature of machine learning systems.
Beyond the lack of ownership of personal data, a further issue faced by individuals is the lack of transparency, accountability and correctability of decisions made by machine learning algorithms. Given that machine learning systems are themselves prone to fault – by reflecting cultural biases that exist in historical data or by using inaccurate data that mischaracterises an individual – there is a need for greater transparency. Can providing transparency in decision-making and delivery of information justify higher costs and less efficiency? What pressure can institutions place on services to provide this transparency? What is an acceptable cost to efficiency?
f. Privacy vs Security
Of the many debates that have raged in the past decade, few have been more pointed than that over the trade-off between the preservation of privacy and the necessity of keeping citizens safe. This is not a new debate, but the reams of data now held by private interests (which can be bought) and by states (which can be captured), in the context of new forms of terrorism including online terrorism, have qualitatively shifted its terms. States sanction mass surveillance and data processing in the name of maintaining security for citizens, who do not know what data is being held on them or others.
Moreover, if tools to access personal data exist on every digital device, then every individual is open to complete surveillance. The awareness of surveillance and the potential for wholesale invasions of privacy have been shown to affect behaviour, particularly through self-censorship and self-limiting behaviour. Hence mass surveillance – and even the mere capacity for it – is a means to control and limit behaviour. Given the potential capacity for total surveillance, what other concerns prevent the implementation of these tools and the extension of the capacity of state security agencies? What limits on privacy are individuals willing to accept in return for security? What effect does complete surveillance have on the behaviour of a population? Are the effects worth the trade-off for protection?
IV. How and what to regulate? A starting point
There is no ready-made set of principles for the internet age. The best way to arrive at these principles is by gaining a better understanding of the different aspects of human beings affected by technological change. The human is a moral being, a social being, a reactive being and a storyteller — but also an individual. Therefore, the human and social sciences are vital to providing different ways of considering the dilemmas outlined above. They help to guide the right questions as well as the right answers. The questions of how to regulate the internet and decide what kind of online environment to create for human well-being can be answered in a myriad of ways depending on the qualities one wishes to privilege.
Here are some of the lenses that can help to deepen this understanding through the Bridges Project:
- A philosophical lens could help frame the ontological and ethical problems facing regulators. Who is the human being online? What moral code would this human being respond to?
- Anthropology, with its emphasis on community and ritualised behaviour, can shed light on the nature of online relationships and exchanges, and on the codes that emerge in specific online communities and the functions they fulfil. This can guide an understanding of what people feel they need to find online and what binds them to forums in which they participate.
- Psychology is vital to understanding how humans respond to stimuli, how they construct their reactions and justify these to themselves and others, and how they make choices. Understanding human behaviour online is in part about understanding human blind-spots and biases, and the functions they perform in people’s daily lives.
- Psychoanalysis brings in the importance of the sub-conscious, including fantasy, in how we feel empowered or disabled by the internet. What underpins people’s fears, enthusiasms and frustrations? And what are the limits of cognitive approaches to understanding online behaviour vs. the benefits of more humanities-based lenses?
- Neuroscience considers how the brain adapts to new environments, including the digital world.
- Political science puts power and the negotiation of power at the heart of the debate. How is consent secured from people when technology is changing their political preferences and engagement?
These perspectives help to illuminate how the various aspects of human nature are engaged by the digital world. In this respect they help take into account a more complex and complete human being and place her at the centre of the debate. Only then can regulation properly foster a healthier online environment that promotes the well-being of individuals, communities and polities. Regulation should make the internet more pro-social and pro-democracy by limiting the scope for behaviour that preys on weaknesses in human behaviour and by maximising the incentives for people to make informed and considered choices. The question is: how and by whom? That is the subject for our retreat.
Catherine Fieschi and Heather Grabbe