How shall we live?
In closing the NLnet-sponsored project “Trust, semantic learning, and monitoring”, I want to end with a summary of what I believe the study may tell us about the role of trust in our increasingly online and cyber-enhanced society.
(For readers interested in the full research: the whole trust series is here)
Few subjects receive more attention than trust in the social science literature, yet it wasn’t always so. Two books, published in the 1990s, kicked off an avalanche of writing:
- Robert Putnam’s Making Democracy Work: Civic Traditions in Modern Italy (1993), and
- Francis Fukuyama’s Trust (1995).
These works contributed to making trust part of a larger and more complex notion of social capital, but didn’t obviously result in a clearer description of what trust does. Why are we still talking about something so obvious? Even after thirty years of subsequent work, no one has really gotten to the bottom of what trust is on a scientific level.
During this period, I was developing Promise Theory for something entirely different: studying agent based software systems. The two topics met tangentially, but resulted in a fortuitous series of papers that ended with the NLnet project this year. Let me try to summarise the state of that work and what I think it might mean for the future.
A muddle of our own making
Why study trust? Let’s simply say: we should distrust what we read about it. Unable to formalise the subject convincingly, no field of research has made much progress in defining what trust is. We trust that we know it when we see it!
The social science literature relies heavily on questionnaires about good or moral behaviour to measure when people believe they trust. Apart from feeling trite and superficial, these fail to plumb the depths of what’s going on with social capital. Similarly, in the world of Information Technology (IT), ideas about trust are equally superficial and vacuous. Trust is something of a dirty word in IT. “Thou shalt not trust anyone” is the prevailing narrative. Trust is proclaimed the mortal enemy of security.
Making trust into either angel or demon feels equally vapid. It surely evolved for a reason, so what was it? That’s not to say that certain essayists on the matter haven’t gotten close to an answer, yet their polemics seem to have been largely ignored–research is now an industry that doesn’t always profit from ending discussions. Natural sciences, like neuroscience, haven’t accomplished much either (see the selected literature). It was only through an unexpected series of accidents that I ended up working on the issue from the perspective of Promise Theory and found a simple way of summarising all those ideas that resulted in some measurable predictions.
Over a decade ago, in 2006, Jan Bergstra and I started to discuss the role of trust in Promise Theory, but we never finished the work. We knew, from the precepts of Promise Theory, that trust had to be built on the idea of individual assessments, involving intentions in some way, and that assessment was a completely ad hoc judgement for each individual in turn. Such a view was particularly hard for IT folks to swallow, as the standard lore in IT is that all answers are absolutely determined and must either be true or false. Over the years, others too have had the same essential ideas about trust on a heuristic level–but it’s only with the formalities of Promise Theory that a quantitative picture of trust seems to show consistency.
The short version is this: trust appears to be something much simpler than a moral concept (just as promises turned out to be simpler than obligations): indeed, it takes the form of a heuristic regulator for attentiveness. If we assess something to be trustworthy, we don’t look too hard, so it saves us the work of attending to what is going on; it frees up resources for better use. Trust promotes and is promoted by continuity (steady promised behaviour) and is demoted by unexpected transients or surprises (impositions, accusations, etc).
The trust system, as described, fits neatly into what Daniel Kahneman would call cognitive “system 1”. It’s a quick and approximate rush to judgement, which can later be dissected by more laborious “system 2” methods if it should become necessary.
How trust works
Most of the writing about trust concerns why and when we trust. It’s tacitly assumed that everyone knows what it means — but what’s it for? What’s the function of trust?
Let’s not try to repeat what makes us assess something as trustworthy. Enough has been said about that–indeed Promise Theory deliberately keeps out of how such assessments are made. Rather, let’s focus only on how the mechanisms of trust shape decisions that build on the economy of attention for each agent in a system.
It’s a social capital–a common currency–to be sure. Just as money behaves like an action potential, similar to energy (or vice versa), acting as a rough predictor of action, so trust has those qualities for higher level processes between “agents”. But there is a better analogy than capital available in the natural sciences. It’s perhaps disconcerting to think that something made of counting tokens can predict our behaviour, even in the broadest of terms, but it seems to. Whatever we think about free will, there are constraints at work. As an assessment of activity, trust is more like the concept of energy, with its kinetic and potential forms. Its moral semantics are of secondary importance.
In the NLnet project work, trust emerges as an accounting system for investing our attention. It has two essential parts: what we commonly call trust and trustworthiness (Trust and Trustability, if you like). Trustworthiness (or potential trust) is a passive assessment about another person or phenomenon; it summarises the cumulative reliabilities of past experiences (the continuity of our experience of them). Kinetic or active trust, on the other hand, is a policy we formulate ourselves: how much effort are we willing to either invest or forgo in watching over a relationship with something or someone? Everyone makes up their own mind about it. Our decision varies according to how busy we are and how much spare capacity we have.
If you can’t trust, you have to pay attention in order to verify–and verifying is expensive. It requires our attention: our time, our cognition. Verifying means sampling the ongoing process of interest regularly, like a tax. A simplistic summary goes like this:
- Trustworthiness is our assessment of past reliability in keeping promises (advertised behaviour). Because we don’t have data on everyone all the time, it’s treated casually and is used like an exchange currency. We borrow impressions and reputational hints from anywhere. The key is that assessment should be quick and cheap.
- Mistrust, on the other hand, is the amount of attention we invest as a result of that assessment of trustworthiness. The more we mistrust, the more effort we invest in verifying and validating behaviour. The more we trust, the more we look away and save ourselves that effort. Trust then is an important work saver–the decision to forgo attentiveness.
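The bookkeeping described above can be made concrete in a toy sketch. This is not the project’s actual model–the exponential update rate and the linear “attention = budget × (1 − trustworthiness)” rule are my own illustrative assumptions–but it shows the two-part structure: a cheap running estimate of past reliability, and an attention spend that shrinks as that estimate grows.

```python
# Toy sketch of two-part trust as attention accounting.
# Assumptions (hypothetical, for illustration only):
#  - trustworthiness is an exponentially weighted average of kept promises;
#  - mistrust (attention spent verifying) is whatever trust does not cover.

def update_trustworthiness(w, promise_kept, rate=0.1):
    """Nudge the running reliability estimate toward the latest outcome."""
    return (1 - rate) * w + rate * (1.0 if promise_kept else 0.0)

def attention_cost(w, budget=1.0):
    """Mistrust: the fraction of our attention budget spent on verification.
    High trustworthiness -> we look away and save the effort."""
    return budget * (1.0 - w)

# A mostly reliable agent: as promises are kept, verification cost decays;
# a broken promise (a surprise) bumps the cost back up.
w = 0.5  # start undecided
for kept in [True, True, True, False, True, True]:
    w = update_trustworthiness(w, kept)
    print(f"trustworthiness={w:.2f}  attention spent={attention_cost(w):.2f}")
```

The point of the sketch is only the shape of the feedback loop: continuity (kept promises) raises the passive assessment and lowers the active spend, while transients do the reverse.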
Studies tend to muddle active trust with passive trustworthiness, perhaps because our language for these (in English) is ambiguous. It’s normal to comment mainly about trustworthiness, yet it shouldn’t escape our notice that for the active trust there’s also a continuum of degrees between innocent curiosity and apparently more toxic mistrust. In essence, these two are the same phenomenon. Only our somewhat fluid semantic interpretation of them is different.
The key insight from Promise Theory was that the two kinds of trust play basically the same kind of predictive role as potential and kinetic energy play for physical processes. Trustworthiness is a summary of past predictability, i.e. reliability or continuity in keeping promises, while mistrust is a level of kinetic activity or busy work induced by that assessment. It’s about an economy of attention.
Where we focus attention
Trust clearly plays a role in shaping society, but it turns out that it doesn’t work in quite the way we think. Our preoccupation with moral issues has left us confused. We’d like to believe that goodness brings us together, but that’s not what the data reveal. According to group studies, we don’t come together because we trust: we come together because we align our intent to mistrust. Often we are curious or driven to pay attention to some unifying element around us. It might be a leader, a common threat, a promise or a shared task. We stick together because our interests align and we suffer the mistrust of others until we can no longer justify it.
In the past, society meant dancing around fires, enduring feudal rulers; we embraced slavery, and hailed emperors. We’ve been through peaks of civility, rigid in protocol, and low points of savagery, none of which were heights of morality. Trust has been a part of us all this time. So did trust change? Whatever trust does, it has allowed us humans to manage the invention of society, or the scaling of stable and coherent action in groups that don’t (immediately) disintegrate into conflict. It overcomes group pressures and maintains continuity of collective action–but it does so with widely varying semantics.
Attention is not just for humans. As we build more proxy technologies and depend on them as infrastructure, attention becomes a machine concern too. Not only do we have to balance when the self-driving car is paying attention with when the backup driver is paying attention, but all the so-called “generative AI” technologies also use selective attention to contextualize situations and frame appropriate actions.
Are we still paying attention?
The two forms of trust seem to be built into us, presumably on a hereditary level. That doesn’t mean they are constant in their effect, or are even fit for the purpose for which they originally evolved, but we still have them.
It’s not unnatural to think that those ancient instincts might need to adapt to a new kind of living for our impending cyborg age too. Perhaps they get in the way of modern living? We may need to rethink where we direct and spend our attention in the busy modern world. How indeed might we adapt our behaviours or shift our attentions for the benefit of all? Here are some things to consider:
- We don’t pay attention to the same things anymore. We’re not looking out for predators while foraging for food, or minding the alpha male. The steady merging of human life with technology and all the shared infrastructure we’ve built (sanitation, water, power, communication, travel) has changed our survival narrative.
- Group action is offered through inanimate proxies now. We both have and expect more shared dependency than ever before–on a global scale, but with less face-to-face contact.
- Survival skills, like growing food, that were common even two generations ago are basically absent in the broad population of the developed world today.
- We are more connected to others than ever before through technology, yet we are farther apart from close human experience than we’ve been for centuries in our cities and communities for the same reason. We put up barriers and increasingly experience one another through technology as the interloper, rather than meeting in the flesh. We exert different pressures on one another than in the past, and our use for trust is virtualized.
- We can avoid messages we don’t want to hear, and it’s easier than ever to find minds that won’t challenge us through our electronic telepathy. Our range is practically unlimited by geography.
- We might ignore those we feel are untrustworthy, but we also ignore those who are completely trustworthy! If mistrust has a function, it’s to make us curious about new ideas that challenge our own thinking and offer new possibilities.
- Marketing and propaganda try to co-opt our attentions: the exploitation of our curiosity and mistrust has reached all time highs (or lows, depending on your point of view). Our cognitive inputs are filled to the brim with impositions vying for our attention.
- In the modern world, we are expected to tolerate a much greater level of diversity and otherness than ever before. Improved communication brings it all closer to us. The effort of watching our backs drives us apart from that otherness and tribalizes us along more comfortable lines that we can trust–not just back to kin relationships, but to national identities, sports teams, etc. The desire to find an island of trust induces a reversal of tolerance and globalism.
So, is trust effective at serving the same purpose today that it once did? Are we asleep at the wheel of life, drifting apart in a state of spoiled lassitude? Conflict is one way to break society; indifference is another.
A decline for Western Civilization?
All major phases of past civilization came to an end eventually. If they didn’t experience complete collapse and extinction, then they underwent major transitions. Today, some suggest that our own Western civilization is declining from its peak. The signs are all around. The arguments are many, but here are a few points to consider about how we trust:
- Development. We divide the world into developed and undeveloped. Developed implies something that has stopped developing and rests on its laurels. This is particularly true of Europe, where the rapid post-WWII rebuilding provided a temporary cause for coming together and raising the standard of living quickly. Standards of living peaked for most people in the 1980s, and inequality is now rising back to pre-20th-century feudal levels.
- Alignment. People are now disengaging from public institutions and losing faith in politicians, partly due to the increased competition for attention from our electronic communications, and partly from a policy of putting personal freedom ahead of collective responsibility. When the seeds of collective action evaporate, people drift apart and are attracted to other signals. We’ve seen the formula in the work on Dunbar groups. We become tribal and nationalistic again. There’s a rise in “far right” politics and “far left” politics. Sports teams fight each other just for the colour of their shirts.
- Originality. As we become more connected, it’s harder to separate ourselves from our environments and from one another. How then can we define or justify individuality and originality? How does this change copyright convention? We already see these issues playing out, especially with machine learning tools that explicitly mine information from the public view.
- Blame and boundary. Claims of foreign interference in national affairs, elections, and schools are a common accusation, and are always attributed to political adversaries. We disregard skullduggery in our own tribes and families and condemn the same behaviour in others. Trust is at the root of hypocrisy and deceit too: we see what we want to see.
- New incentives go missing. Kids are less interested in school today, if we read the news. Teachers are merely part of an infrastructure they can take for granted. Their attention turns to mistrust of the conflicts of their own cyborg worlds.
The symptoms of shifting priorities are everywhere if we care to look (if you’re curious enough, if you mistrust my assertion enough to check).
Trust in the Dunbar group numbers
The picture that emerges from Promise Theory is that humans form groups mainly in response to unifying stimuli that grab our attention. This gives us something to test in scientific terms. Our greatest human invention is surely society itself: it was the depersonalization of relationships traditionally formed along kin lines, enabling the delegation of tasks, that has allowed us to scale up coherent communities much bigger than the Dunbar number (at least the most famous one).
Rather than trusting oneself and only oneself, some trust kin and tribe, pack, herd, flock, etc. Humans went beyond this to form even larger groups based on centralization of function and specialisation of skills when there was a task to accomplish together. The discovery of these seeds, which bring us together for cohesion and governance, whether by task, by king or strongman, or by abstract democratic balance, has brought coherence and continuity to society. Like cooking, cooperation yielded a major energy payoff in terms of time-saving efficiency, giving humans time and energy to explore and innovate.
A lot has changed about human social interaction since innate instincts like trust evolved. If trust is indeed an innate instinct, clearly present in other animals, then it evolved for a world that was very different from the one we live in today. Its purpose seems to be a rough effort-saving estimator of the reliability of others’ behaviour. Saving effort is a survival issue, so most likely trust evolved to make a quick and dirty decision about whether to get close to other animate processes in our environment.
Our sometimes infuriating tendency to operate by imposition (throwing tasks like balls: demanding, requesting, even accusing, etc.) is perhaps ironically a sign of a decision to trust in the tacit compliance of others–far more than we deserve. It’s another form of cost saving, a lazy way of asking for help. It’s also a possible reason why we eventually drift apart in the absence of something to hold us together.
Rather than build a stable protocol for exchange, inviting others by demonstrating honest intent, we often shoot arrows like cupid and hope for the best. If the other party lazily gives the benefit of the doubt, the trust on both sides amounts to lazy acceptance. Marketers use this technique in the hope of selling to the unaware. Some might call it a lack of diligence. Whatever your semantics, the effect is the same.
- We’ve learned to trust society and its infrastructure implicitly, so we disregard our environment and focus on self.
- We are drawn towards risky and curious temptations in the virtual world for self-gratification.
- We apparently need each other less, as we have everything in one place on our smartphones. We trust the privileges of technology, but we still engage online, because we mistrust the intentions of others on our private channels.
- And so we’re drifting apart from each other physically, and drifting closer to the unknown of online connections. Mistrust = curiosity = attention.
How do we know this picture is correct? It’s certainly counter-intuitive. But by chance, the NLnet project led to studying data from online forums like Wikipedia, and this offered a fascinating test of the energy hypothesis.
The data plot below shows how Promise Theory and data agree to predict the size of human collaborative groups during Wikipedia editing. This same theory predicts the Dunbar hierarchy of levels of human interaction, with a little neuroscience thrown in. It’s rare to find such close agreement between theory and data in a social science context.
The Dunbar group size probability distribution: Promise Theory prediction compared with data.
This picture of trust feels upside down to those who try to interpret trust in moral terms, yet (unlike the moral view) the attention picture is completely consistent, and it fits the data. We should take it seriously.
In our “developed” world, we can have almost anything we want at the push of a button. Our physical connection to each other is sometimes invisible, but our cyber-telepathy is growing. I don’t need you and you don’t need me. We just need our smartphones. Talk to the chatbot!
As we clothe ourselves in technology, we risk trusting the wrong things to sustain society. We trust our proxy infrastructure and machinery, because these seem predictable and don’t have a fickle agenda. We trust other people less, because they are less and less familiar to us. This breaking of the strong bond of human to human cooperation could easily undo the civil structures we rely on.
At first blush, it seems that trust might be incompatible with the increased freedoms we crave in modern society. Whenever we trust (kinetically or non-attentively), we create pockets of shelter where criminals or system parasites can exploit and create disease in society. Societal disease is transmissible too, from society to society. We need constraints on living in order for kinetic trust to work: consequences for untrustworthy behaviour need to be part of any regulation system. So, sad as it is to admit, the security pundits might be right. The world we are on course for is not to be trusted too much.

On the other hand, without the need to depend on one another to regulate relations, there is every reason not to care about others’ welfare. Smartphones take that away. But the counterpoint to wearing our distrust on our sleeves is that it too sends a signal that propagates: it’s used like an accusation, which in turn breeds further mistrust in a snowball effect. If mistrust gets out of control, it consumes all our attention.
For some, “developed” means “finished”: an unwillingness to change. One only has to look at how Asian countries accelerated their development in the 20th century once they were able to leave behind intransigent ideologies and embrace change, learning from what others had done to leapfrog into the present. There’s a sense in which Western civilization has become complacent and spoiled by past success. Lenin is said to have remarked that human civilization is never more than three good meals away from anarchy. Imagine if we lost electricity, roads, and (yes) waste and sewage collection and processing! We sometimes make fun of the Nanny State when governments decide too much for us. Now we have to ask: when might technology do too much for us? Self-serving voices tell us the threat of our times is Artificial Intelligence. Perhaps the real threat is that we could destroy ourselves through complacency, through inattention, through trust–by failing to sustain the basis for human society.
How should we live?
What does this view of trust say about human society in the 21st century? Are we going in the right direction? Far from bringing about coherence, trust puts us to sleep. It’s when we trust too much that we drift apart from complacency. It’s suspicion and mistrust that keep us interested in current events–that keep us coming back to check each other out. That doesn’t mean we should watch everyone like a hawk either–then we’d have no time for anything. The issue for trust in the 21st century is: how do we want to live?
If we want to preserve the coherence of our society (as we understand it today), then the conclusions from the Promise Theory of trust might be summarised like this as the simplest of guidelines:
- Don’t impose obligations or activities on others (demanding their effort); this lowers trustworthiness assessments.
- Don’t throw accusations or blame (contention that wastes time); this lowers trustworthiness assessments too.
- Always be offering something to others (a reason to study and question rather than ignore you); this increases continuity and trustworthiness.
- Always be willing to accept what others offer (give the benefit of the doubt after suitable attention has been invested) so that you won’t be alone. You need others, like it or not.
- Don’t burn yourself out by spending mistrust excessively. It won’t necessarily speed up the promise chain ahead of you. It may even signal an unwanted imposition that costs you more in the end.
The verdict is in: pay attention to one another, but don’t take too much for granted. There must be balance in every economic network.
Today, we’re in retreat due to population density pressures. If we’re watching each other too much and we make use of (or depend on) what we see, there can be too strong a binding between agents. Then it becomes difficult to argue that the agents are independent and can make individual contributions. This changes the significance of traditional ideas about copyright and ownership, as well as the sense of privacy. Consider the most populous nations and how their attitudes may differ from our own.
These are rules for maintaining stable groups. If we see beneath the apparently moral facade, they are simply about saving effort by fostering alignment of intent. What is intent? It’s the direction of activity, whether chosen or emergent (if you could even tell the difference).
I suspect that we talk too much about trust and think too little about the promises that are at the root of it. If we paid more attention to both understanding and keeping promises to offer (+) and to accept (-), trust would be a simple matter. That surely is the point, though: we don’t pay attention to the right things precisely because trust encourages us to be frugal in spending our attention. We prefer to assume that our own part in interactions is not the problem: after all, it’s easier to moralise and even impose the fault back onto someone else. It’s cheaper to blame and accuse, particularly when under stress.
We are heading into a time in which we choose to redefine our isolation. The narrative of individuality and personal freedom, the economics of uniqueness, of standing out, makes us erect barriers against sharing so we can trust that we don’t need to pay attention to what’s on the other side. Increasingly, we don’t take the initiative to engage with others, we respond mainly to imposed stimuli. We’re becoming ‘reactors to impositions’, rather than creative initiators engaging mutually for continuity and purpose, even as we complain about our bosses barking orders, trying to impose on us and force our drifting attention. We’ve all but gone back to a wilderness world of lone predators again, by choosing to live in a virtual reality.
Is this too cynical a world view? Perhaps. Well, it’s only a direction of thought–extrapolated from analysis and with some data to support it. Change doesn’t have to result in a collapse of civilization, though some researchers think we are already on that trajectory! Perhaps we need a new User Guide for Trust in the 21st century? Our instincts tend towards throwing problems at others. We’re rather good at making tools for fighting, but we’ve invested little to nothing in technology for the purpose of peaceful coexistence, for scaling of fruitful and mutual human motivation. We retreat into independence rather than confront the promises that aren’t working for us.
If the fusion of humans with technology crosses a threshold of dependency in which we disregard and dehumanise one another too much, society as we know it might never recover from the smallest of shocks to the order we take for granted. It’s not AI we should be afraid of: it’s our own complacency.
Security pundits tell us we need zero trust. But that’s too little: zero would be the end of our freedom. It would mean complete addiction to the latest updates, inescapable attention (if our phones are not already taking us there). Technology is not where the real threat lies. One only has to look at tech movements of today, such as Agile or DevOps, to see that people in tech are not too worried about technology. What concerns them most is how to deal with other humans. Machinery is easy, humanity is hard. We’ve lost some of those skills by our misaligned trust.
It will be a tough pill to swallow, for many, to change our way of thinking about trust–away from a moral narrative towards a pragmatic and functional one; but, if we in the “developed” world want to adapt to the emerging present, this is where we can begin.
Acknowledgement: I’m grateful to NLnet Foundation for making this work possible. Thanks too to Daniel Mezick, Robin Dunbar, and Jim Rutt for reading these remarks and for interesting discussions on related matters.
Appendix 1: User Guide for Two Part Trust
See the detailed summary of the trust project. For simple promises:
Appendix 2: Slogans
Around the millennium handover, I read a book called Smart Mobs about the emerging mobile phone technology. As part of my own research into IT systems at the time, I studied human control and governance, and wrote a book called Slogans: The End of Sympathy to better understand what I’d found. I wrote it as a novel, like the classic Science Fiction social catastrophe novels of John Brunner, Ayn Rand, and others, because I wasn’t qualified to write an academic paper; also, I wanted to capture the human element in a more visceral way. Even from what I could see about nascent mobile phone technology in 2003, I could see what was happening to people around me in Norway. A small amount of extrapolation (even before Facebook and social media) resulted in a picture of a world ruled by online addiction, in which people fled from public life into tribes. This (I believe) is basically the world we are heading towards today.
I predicted then that governments would have to wage a kind of information campaign (war) on their own citizens to teach them how to live together once again, because they would take their lives for granted. In the story, as tribes and gangs went rogue, organised crime flourished and people stopped caring about one another. People were trafficked and manipulated. Special police task forces devised a computer game to trick people into playing and relearning the rules of civilization through the game. Is this to be our fate?
Appendix 3: Rights and imposition of obligations
The narrative of obligation and rights was one of the first ideas demolished by Promise Theory. We’ve almost come to believe we have a right to the infrastructure we take for granted. We trust these “rights” and apparent fixtures too much and pay too little attention to what provides for our present standard of living.
Promise Theory makes it clear that the language of rights is a modern affectation, a politicised red herring. There are no intrinsic rights in nature: we only have consensual or grudging agreements, which we may or may not trust. If we agree to behave in a certain way we can build a coherence to our behaviour. That’s quite different from a right. Laws and rights were invented to inhibit questioning. They are meant to be trusted, to quell dissent, to foster alignment. As long as no one is looking, trust keeps everyone from bucking the system–for better or for worse, good or bad. What this points to is that we should talk more about intent, about keeping steady promises, than about our rights.