Online trust: between competences and intentions

Published on 19 June 2014
Updated on 19 March 2024

Trust (or the lack thereof) is a frequent theme in public debates. It is often seen as a monolithic concept. However, we trust different people for different reasons, and in different ways. Sometimes we trust that people can do something (competences). In other situations, our trust focuses on their intentions. This text is about trust in online space. It is inspired by discussions at the WSIS+10 high-level dialogue on cybersecurity and trust.

Relevance of trust

According to sociologist Niklas Luhmann, ‘a complete absence of trust would prevent us from even getting up in the morning.’[1] Many of our daily routines presume trust.

Trust not only makes our lives simpler; it also makes societies richer, as Robert Putnam showed in his study on trust and the economic success of Renaissance Italy.[2] The same logic applies to the success of Silicon Valley. Trust in institutions and in law frees time for innovation and creativity. In many parts of the world, institutions are weak and trust in them is low, and a lot of energy is spent on avoiding being cheated.

Are current levels of mistrust greater than those of the past? Breach of trust has been around since Adam and Eve’s exploits in the biblical Garden of Eden. There has always been some failure to comply, and some abuse of trust. But our times and the Internet make trust more relevant. A significant part of our lives takes place in online spaces, which cannot be easily verified; this is a particular problem for those for whom ‘to see is to believe’. With our growing interdependence, the stakes in trust (or the compensations we make in its absence) are higher.

Trust in the online world

Our online trust is machine-driven (‘mechanical trust’). We treat computers as just another device that extends our capabilities. Just as a car takes us further and more quickly than walking, computers increase the reach of our written communication and our access to information. We demonstrate our trust in technical devices by our reliance on them. Just as we trust that our cars won’t break down, and that they will bring us where we intend to go, we also trust computers to complete the tasks we require of them daily.

In the wake of the Snowden revelations, trust in ‘machines’ has evolved into a question of trust (or the lack thereof) in those humans that operate the ‘machines’. It is no longer about the competence of the machine – whether the Internet will function – but rather about the intentions of those operating it.

But it is not only about intentions. It is also about systemic changes in the way the Internet economy operates. We, as users, are part of a ‘problem’, or a misunderstanding. It would be naïve to believe that the richness of Internet services is paid for only by our Internet subscriptions (in Switzerland, CHF 49/month). The real cost of ‘free’ Internet services, in software, servers, and reliability, is much higher. The difference lies in the price we ‘pay’ with our data, which are monetised by Internet companies through business models based on online advertising.

Does this unclear arrangement undermine trust in the Internet and those who provide its services? In most cases it does. But in some cases there is a ‘tacit deal’. For example, I am fine with Google monetising my data in exchange for free use of its Google Translate application. Whatever it earns by using my data is fair compensation for helping me to overcome my lack of talent for learning foreign languages. This is my ‘implicit deal’ with Google. But others might not like this type of deal. It is not transparent, and it may undermine users’ trust in the Internet.

What can be done?

We can start with a few simple steps.

First, the way our data is handled (including the monetisation of data) should be fully transparent to users. This will help users to make more informed decisions on how they want to use Internet services and applications.

Second, governments and public authorities should require that terms of service (ToS) are clear, concise, and prominent, perhaps including a ‘plain language’ version. Governments could require that ToS be easy to find rather than hidden. In particular, companies should increase the font size of the delicate stipulations currently buried in the ‘fine print’.

Yet even these steps may not solve the problem, since it relates not only to the good or bad intentions of the main players, but also to profound changes in the business model that challenge some pillars of existing values and rules (e.g. privacy protection).

Modern society may need a new ‘digital social contract’ among users, Internet companies, and governments, in the tradition of Thomas Hobbes’ Leviathan (exchanging freedom for security) or Rousseau’s more enabling Social Contract. The new deal among citizens, governments, and business should address the following questions: Should governments have a larger role in protecting our interests and digital assets? Should governments ensure that we have the necessary information, but leave it to each of us to decide whether we are ready to give up a portion of our privacy in exchange for convenience?

A social contract could address the main issues and lay the foundation for the development of a more trustworthy Internet. Is this a feasible solution?

Well, there is reason for cautious optimism, based on the shared interest in preserving the Internet. For Internet companies, the more trusting users they have, the more money they can earn. For many governments, the Internet is a facilitator of social and economic growth. Even governments that see the Internet as a subversive tool will have to think twice before they interrupt or prohibit any of its services. Our daily routines and personal lives are so intertwined with the Internet that any disruption to it would be a disruption to society at large. Thus, a trustworthy Internet is in the interests of the majority.

Rationally speaking, there is a possibility of reaching a compromise around a new social contract for a trusted Internet. We should remain only cautiously optimistic, however, since politics (especially global politics), like trust, is not necessarily rational.


[1] https://helda.helsinki.fi/bitstream/handle/10138/23348/trustasa.pdf?sequence=2

[2] https://cadmus.eui.eu/bitstream/handle/1814/317/sps20014.pdf

 
