Can we learn when to trust?

New research on relating trust, (machine) learning, and heuristics

Mark Burgess
18 min read · Nov 27, 2023

Would you get into a taxi on a late night in a dark alley? In your hometown? In a foreign country? What would help you to make that determination?

What is it that makes us trust? Few subjects have been written about more than this one, both in philosophy and in social science. Basic narratives tend to focus on things like social and moral notions, even religious traditions. For example, we may trust as a reward for someone’s good intent, or because it feels good, or when we consider that people are good people (a circular argument), etc. There are thirty years of academic papers on this issue, with little sign of an end. But no one has carefully tried to ask what trusting actually does for us. We know that we want people to trust us, but we’re not sure why.

In society, we’ve invented third parties and regulators like the government, companies like Verisign, even the church to help us with the pragmatic business of deciding. Meanwhile, our human fascination with morality has kept the ideas mired in confusion, guilt, and even shame. We foster prejudicial divisions within society based on whom we consider among our graced circles of trust. We associate trust with what we (or someone in authority) decide is culturally or morally good behaviour, or sometimes merely good background or clan, but on deeper consideration all these notions are clearly mistaken. Trust is not about goodness. It’s involved in all kinds of menace as well as generosity.

If we want to know when or whom to trust, we need to be clear about what trusting means, or rather what it does for us. We no longer trust the oracles of the past to tell us the answer. Trust in government and religion is at an all-time low. Could software tell us the answer? One can learn a lot about someone and their attitudes from their answers to these questions.

Could we ask an AI about trust? Can we trust AI? Can AI trust us? If you are a security engineer in our increasingly information-enabled environment, the question takes the more specific form: is it safe to trust and therefore have dealings with certain people, things, and processes? Alas, answering that question is not actually what trust does. The final decision about engaging or running away into the shadows is something we determine independently only after trust has told us we need to think about it.

What we know so far is this: the purpose of trust is to make quick judgements about whether to engage with or expose oneself to something. In survival terms, it has to be a quick decision, because stopping too long to think about whether we can trust something could stall us in responding to the very dangers we hope to detect. From an evolutionary perspective, deciding with certainty whether something is right or wrong, good or bad, safe or dangerous in its intent, is a highly individual and potentially expensive determination. We invest in societal laws and rules of behaviour to help people short-cut some of that thinking, but it’s a slow process by any method. Also, you have to trust the laws.

Logical reasoning (or what Kahneman calls system 2 thinking) is a luxury we indulge in after the fact, but in the heat of the moment it’s too slow to serve our best interests. Trust has to be from the gut. It has to balance the emotional impulse to fight or flee with a similarly coarse and impulsive level of caution about getting into that situation in the first place. We shouldn’t run away from everything after all.

Nature seems to have evolved trust in us as a valve to help us manage the effort of reasoning: to short-cut and simplify decisions. All this remained largely mysterious until recently, obscured by endless blind alleys of belief in humans as paragons of logical reasoning and moral values. Today, partly thanks to what we have learned about informatics and AI over the past thirty years, and to the volumes of data generated by our information society, we have access to more clues than ever before in the cognitive record about what trust is and how it works. To be more scientific about it, we would try to collect some kind of data and see if these ideas stand up to scrutiny. In a sense, that’s exactly what we’ve done by looking to data science and artificial intelligence research.

The answer is not what most of us thought.

Trusting the machinery

I’ve spent this year engaged in a project funded by NLnet to understand the semantics and dynamics of trust for the increasingly cyborg society we live in. It seems like an old problem to study, one that many have tried to tackle from a pragmatic point of view in the past–but particularly as we see a marked decline in Western civil society, it’s considered one of the important issues of our time. The good news is that, thanks to the research around human-computer systems and intent, we’re now able to take a more scientific approach to trust than before. It’s thrown up some fascinating scientific questions too.

So, can we learn when it’s “right” to trust? And if not, could a machine learn it for us?

Only a decade ago, machine learning was an obscure sub-field in a narrow group of academic journals. Emerging AI has since shoved us onto a roller-coaster of expectation concerning the abilities of machines to decide for us. The “Big Data” push has changed our thinking about what we might be able to know. Today, the spectacular computational indulgences of the tech behemoths (starting with Google) have shown what brute force scale in data and computation can achieve. In Data We Trust.

The goals of trust are not quite like the goals of “AI” as most of us see it today, so AI probably can’t tell us how to trust directly. The latter is all about getting answers to questions, as an outgrowth of search engines. Trust is about when you don’t need to do that search in the first place. Nevertheless, there’s a complementarity to these ideas and the methods for inference in AI and trust are converging. I’ve written more about the new view of trust earlier in this series.

What AI teaches us is a humbling lesson about how much effort is actually involved in cognitive reasoning. Our brains make it seem too easy! Like so many of the industrial achievements of the past, spectacular acts of Clarkean magic still conceal a supply chain sourced from impoverished workers in factories, mechanical Turks actually putting in the hard work to make a shiny facade for the more privileged classes. The sense of magic is nonetheless pervasive–and already expensive enough!

How many languages speak of intent?

What kind of data could tell us about trust? We should be cautious about expecting simple answers: data don’t speak in the kind of black and white Boolean estimations that we seek in our simplistic age of impatient soundbites. Science is only about finding repeatable patterns to identify as trustworthy knowledge. The uncertainty in identifying them is the obstacle that trust hopes to short circuit. In the NLnet project, I looked to Wikipedia as a place to start: it’s one of the few open sources of online information that show humans working alongside one another to produce a result (or not). Surely in this archive of process we can learn about trust!

The challenge one faces is this: clues about agents’ behaviours do not arrive as a single curated, one-dimensional stream, like a telemetry channel or a classroom training set. We have to assimilate multiple signals in the thick of experience. When humans speak, we use words, body language, metaphors, indirect references to the environment, habits, etc. Assimilating these many different signals is not something an “AI” has addressed yet: the best applications use one or two of them. We tend to begin with words, as this is the best technology we’ve come up with for encoding information. In Wiki editing, there is text produced about some topic, but there is also a meta-text discussing the changes and related issues. Signals of change aren’t only expressed in one language at a time. When we observe a process record in online data, it speaks in many tongues. Think of Wikipedia (which is the object of my study). The evolution of the platform involves:

  • The grammatical narrative language of the subject matter.
  • The fragmentary edits of changes.
  • The exclamatory signalling language of meta commentary about the edits.
  • The formal signalling language of markup and presentation (emphasis and framing).
  • etc.
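
To make that multiplicity concrete, here is a minimal sketch in Python of how one might represent a single edit event as a bundle of parallel channels rather than a single text stream. The field names are illustrative assumptions, not Wikipedia’s actual schema or API.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class EditEvent:
    """One observed change, carrying several parallel 'languages' at once.
    Field names are illustrative, not Wikipedia's actual schema."""
    article: str                 # the grammatical narrative text it belongs to
    diff_fragments: List[str]    # the fragmentary edits themselves
    meta_comment: str            # the exclamatory commentary about the edit
    markup_changes: List[str]    # formal signalling: emphasis, framing, templates
    author: str = "anonymous"    # identity is often missing or unstable

# A single observation mixes all of these channels:
event = EditEvent(
    article="Promise theory",
    diff_fragments=["-unsourced claim", "+cited statement"],
    meta_comment="rv unsourced edit",
    markup_changes=["''emphasis'' added"],
)
```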

Linguistics is the study of patterns to which we attribute meaning. We think about language as something human, but all processes express changes, and their utterances may be regular like a language, as Chomsky showed. All this applies to every forum where humans meet. Wikipedia is only one useful source of data.

Viewed in this light, behavioural analysis needs a prism for unravelling hidden channels of meaning. AI researchers sometimes call these modalities when distinguishing different sensory types: video, audio, text, etc. But even text has different modalities. We use natural language in a variety of “natural” ways. Brute force methods begin with a data prism, which tries to classify and separate different aspects of a signal. It seems like a good idea, but the cost is high.

Written language may be the carrier wave, but the signal is the intent encoded within it.

What language models do is look for meaning on a single level: in the text. The structure of the text, the context of the larger debate, and so on all influence the actual meaning, but these have to be added in hierarchical layers. All this is very expensive to uncover and comprehend. If we could trust something much simpler, we wouldn’t need to verify at great expense! That something turns out to be heuristics.

From whom the data tolls

We associate trust with identity: with entity. In addition to the identification of which language is being used (when to trust), we have to recognize which agent is speaking (whom to trust). Is it a single person, a group of indistinguishable entities, a web page, a version control record, or some other abstract being?

A cursory examination of the data tells us that we can’t trace users on an individual basis. Users are often anonymous, and those who give their names change their identities at will. They might lose their credentials and start again, or intentionally conceal who they are. Promise Theory tells us that we can’t trace the fidelity of intent through a cooperative chain of individuals, each peddling second-hand data from its predecessors: we can only assess the last agent downstream in the chain of custody. We can’t possibly know the motivation of one agent based on what the next one does. In the wiki editing data, we can only see that most users contribute just once or twice to an article. The users who contribute a lot are rare, and identifying them is expensive, because we have to track everyone and eliminate the predominant mass who don’t contribute, which is a huge number. It’s probably just not worth the cost. This alone is one reason why we trust. Tech companies don’t fear this cost, however. Rightly or wrongly, they keep vast amounts of data at great expense. It enables this study, but the value to the platforms themselves is quite questionable.

Figure: number of users versus the log rate of contentious behaviour. Few users are repeatedly contentious, but why? Because they tend to change their identities or disappear!

The figure above shows that most identifiable contributors to Wikipedia make little trouble in editing, not because most people are good, but because they change their identity or never return. The amount of contention (in-fighting) over process is quite constant, but people hide from the responsibility of owning up to it. Discovering those who contribute reliably also involves remembering those who don’t, which means storing a mass of junk data. It’s expensive.
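
As a rough illustration of why this bookkeeping is expensive, the sketch below tallies contributions per identity from a toy edit log. The log and names are invented; the point is only that finding the rare repeat contributors forces us to remember the huge majority who appear only once.

```python
from collections import Counter

# Hypothetical edit log: (user, article) pairs; identities may be IP addresses
# or throwaway names, so the same person can appear under many keys.
edit_log = [
    ("203.0.113.7", "Trust"), ("alice", "Trust"), ("alice", "Trust"),
    ("bob", "Promise theory"), ("198.51.100.2", "Trust"),
]

edits_per_user = Counter(user for user, _ in edit_log)

# The "reliable" contributors are the rare repeat editors, but finding them
# forces us to remember the huge majority who only ever show up once.
repeat_editors = {u: n for u, n in edits_per_user.items() if n > 1}
one_shot = sum(1 for n in edits_per_user.values() if n == 1)

print(repeat_editors)                        # e.g. {'alice': 2}
print(f"{one_shot} identities seen only once")
```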

Another source of regularity for detecting deviations from the norm, discovered in the early days of CFEngine research, was the pattern of weekly transactions carried out by busy services (which Wikipedia qualifies as) when traffic is dominated by a small number of timezones. See the figure below. We see an interference pattern like a prism pointing to the work done during the days of the week (Monday to Sunday). Signals like this are easy to learn if you know where to look, but they don’t tell us about individual behaviour.

Figure: contention levels between users classified by time of week. Alas, this signal is buried in noise in real time and can only be seen by careful long-term interferometry.
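
For the curious, a hedged sketch of how such a weekly signal can be accumulated: bin event timestamps by hour of the week and let many weeks of data average the noise away. The timestamps below are invented; only the binning technique is the point.

```python
from datetime import datetime

HOURS_PER_WEEK = 7 * 24

def hour_of_week(ts: datetime) -> int:
    """Map a timestamp to a slot 0..167 (Monday 00:00 is slot 0)."""
    return ts.weekday() * 24 + ts.hour

def weekly_profile(timestamps):
    """Accumulate counts per hour-of-week; long runs of data average the
    noise away and leave the interference-like weekly pattern."""
    bins = [0] * HOURS_PER_WEEK
    for ts in timestamps:
        bins[hour_of_week(ts)] += 1
    return bins

# Example with a couple of invented events:
profile = weekly_profile([
    datetime(2023, 11, 20, 9, 15),   # a Monday morning
    datetime(2023, 11, 26, 23, 5),   # a Sunday night
])
```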

Heuristics and trust keep us moving

The greatest challenge in applying learning as a strategy for knowing what to do is whether what we reliably observe can inform us about the likelihood of future benefit or future harm. Many patterns (like day and night) are easy to learn, but they don’t tell us about the intentions of friends and adversaries.

Buried amongst the noisy signals, from a variety of different sources, are a few obvious signs that historical experience tells us are important. They might be handed down by society, parental nurture, curated training, etc. We find them trustworthy and therefore we act on them. If we smell smoke, we suspect fire. If we see stammering, we suspect a lie. But large scale signals are not statistically significant in any scientific sense on the scale of an individual, because they may be based on sparse evidence that takes generations to assimilate. There is a separation of scales in causal behaviour. But not all is lost.

Condensed from ages of data are rare occurrences that don’t make statistical sense on the scale of normal data, but which nonetheless make sense on a meta level. They span the normal timescales we can measure rationally. These are what we call heuristics. They seem irrational, yet they often work.

Heuristics are handed down to us from scales greater than ourselves. Humans are good at finding them, but data science isn’t. Surely that’s what big data and AI are for! Indeed that’s why AI still uses human farms to recognize heuristics and teach them to models. Another way to look at it is: AI trades a snapshot of data for a post hoc insight in a burst of catching up with the data. The work involves a huge amount of energy to speed up time. Evolution is always (but very slowly) in the moment. Its cost is spread over generations. For humans, learning heuristics is cheap and slow, but we gradually inherit the results. For AI, researchers want it all at once–which is expensive.

Whether the effort of either is worth it is only known later, but one could say that AI duplicates the effort of evolutionary selection, because it needs to be trained to align with it.

What trust does is to allow approximate potential to be used in place of more precise reasoning. It’s cheap, but it’s a great start to be followed up by something more expensive only if it seems promising.

Which approach is best?

To see what trust does, we can try to evaluate the value of the data trails left in online platforms. As part of the study, I looked at Wikipedia data. Wikipedia’s goal or promise is to produce quality text–but how? To a large extent, it trusts users to figure that out in ad hoc groups. Someone will add some text, then others who mistrust will come to police it. They might change it, change it back, edit it, etc. Eventually, the text reaches some state of equilibrium. Trust but verify, until you can’t be bothered to verify anymore. Wikipedia trusts this process: it doesn’t get involved too much–it leaves the process to play out.

The question is this: can we figure out which contributors and which articles are trustworthy? We hope that the platform automation could figure this out, because then we might have a solution that every platform technologist on the Internet could use to avoid altercations and “bad actors”.

If we try to understand this process from the data trails, finding a signal is surprisingly difficult. We can see what people do, and when they seem to trust and mistrust one another–but because those assessments are not rationally made, we can’t tell whether the decisions were correct or not (whatever that may mean). This is why we tend to cut through the issue in democracies by voting for an answer–right or wrong, voting will break through the deadlock of indecision.

One doesn’t have to work for too long to realise that the amount of data one can collect about each single article is enormous. It far outweighs the information in the result itself. Analysing it takes hundreds of times more effort than does reading the articles. So, could we do something cheaper? Something like trusting?!

The data show no clear correlation between alignment of user intentions and the number of altercations, because simple agreement is initially rare and tends to come from attrition: giving up a fight rather than winning!

Without fact-checking the article that users are focused on, we can only look at the interactions between users to learn their level of agreement. In a fine Western democratic way, we assume that a majority will rule and yield the truth by voting. In other words, the outcome (whatever it is) is the right one–or at least the one we deserve! We disconnect from detailed attention to process and remain alert only to overt signals of trouble: profanity in the comments, trashing the work of others, etc.

What we find then is that heuristics are the natural partner of trust. Heuristics are partial evidence to which we learn to attribute meaning far exceeding what would be justified by a purely rational process. For example, we can look at the change comments. Common phrases in the commentary are “too this” or “too that”. Some are accusations of “unsourced” information. A judgement of “unhelpful” or “unfair” is rarer, but we know what it means: someone is upset. Impositions and accusations rule! A variety of profanities also crop up, though not as regularly as one might think.

It’s interesting that we give so much sway to accusations that are often emotional in nature! These heuristic signals tell us far more than the rational statistical measures of intent that succeeded in narrative summarization earlier in the study. Formally, they are impositions, according to Promise Theory, and they heighten our sense of mistrust to make us look closer.

So, we can try to look for these words and signals learnt over “evolutionary time”: signals that tend to imply suspicious behaviour. This is cheap per unit time and in the spirit of a trust methodology. Even if they tell us nothing specific, they tell us when we should pay greater attention. It’s not a guarantee of accurate prediction, but therein lies the problem: we’re looking for certainty where none can be found. Language and process are so complex at this level that only the participants themselves could possibly understand the behind-the-scenes goings-on and signal them. They might, they might not. How can we be sure? We can’t. Should we double down on paranoid inspection, or trust that we’ll figure it out if something bad happens?
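
In the spirit of that cheap, per-unit-time scan, here is a minimal sketch: a small list of trigger phrases (the ones quoted above, plus whatever experience adds) and a function that turns a change comment into an attention level. The phrase list and weights are ad hoc assumptions; the point is only that the check costs almost nothing compared with parsing the whole text.

```python
# Trigger phrases learned over "evolutionary time"; the weights are ad hoc.
TRIGGERS = {
    "unsourced": 2,
    "unhelpful": 3,
    "unfair": 3,
    "too ": 1,        # "too this", "too that"
    "revert": 2,
}

def attention_level(comment: str) -> int:
    """Cheap heuristic scan of a change comment: the score only says
    'look closer', it does not claim to judge intent accurately."""
    text = comment.lower()
    return sum(w for phrase, w in TRIGGERS.items() if phrase in text)

print(attention_level("rv unsourced and unhelpful edit"))  # 5: pay attention
print(attention_level("fix typo"))                         # 0: trust and move on
```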

What trust does for us is tell us how attentive we should be.

It’s AI, but not as we know it

Because machines are simplistic and repeatable, but also hard to understand, we tend to trust them. Because AI is a kind of machine, many tend to think it must be predictable. It’s interesting to see who is willing to trust AI and who isn’t. We are not always good judges of what technologies are harmful in the long run. When we ask a machine or another person whom we can trust, we are simply delegating, hoping to engage someone else’s costly resources and save our own.

If brute force learning is no better than heuristics, then why waste the energy and cost? The old CFEngine approach of condensing context assessment into a set of symbolic flags works at least as well as brute-force crunching of language, and it works in real time. One looks for simple signals and trigger words, states, and thresholds that we expect to be warning signs of good or bad. It’s then left to individual policy to respond to them. The last word in trust always belongs to the receiver. It must be so for any autonomous agent, whether human or machine.
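
A toy sketch of that flag-based style, not CFEngine itself: raw observations are reduced to a handful of symbolic classes by simple thresholds, and a separate, receiver-side policy decides what each combination means. The flag names and thresholds are invented for illustration.

```python
def classify(observation: dict) -> set:
    """Reduce raw measurements to symbolic flags using fixed thresholds.
    The thresholds are placeholders, not calibrated values."""
    flags = set()
    if observation.get("reverts_per_hour", 0) > 5:
        flags.add("edit_war_suspected")
    if observation.get("profanity_count", 0) > 0:
        flags.add("hostile_tone")
    if observation.get("new_identity", False):
        flags.add("unknown_agent")
    return flags

# Policy belongs to the receiver: it alone decides how much to trust.
POLICY = {
    frozenset(): "trust: carry on",
    frozenset({"unknown_agent"}): "trust, but glance at the diff",
    frozenset({"edit_war_suspected", "hostile_tone"}): "stop and verify",
}

flags = classify({"reverts_per_hour": 7, "profanity_count": 1})
action = POLICY.get(frozenset(flags), "raise attention and decide locally")
print(flags, "->", action)
```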

Promise Theory predicts that trust has its roots in intent. We can measure it by looking at whether agents’ actions are aligned with the same intent. It sounds a bit abstract, but once again we can use heuristic signals to detect that. For instance, on the Wiki platform, if users undo one another’s changes, that’s a sign of misalignment. If they only make small corrections, then they are basically aligned. See the figure below for the extent of alignment over all users.

The figure below shows an overall rate of heuristic alignment between users, measured from editing changes as signals of altercation. One sees that users prefer to align (right) slightly more than they prefer to be deliberately opposed (left), but most are in between to some degree–some leaving well alone (middle).

The far right is maximal alignment where users only add to one another’s contributions. In the middle, contributions cancel out on average, and on the left users are aligned against the intent of the article itself by deleting more than they add. The graph indicates that users tend to favour extreme positions: all in, randomly neutral, or all out, with a slight bias in that order.
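
One way to sketch such an alignment score from the edit record is to treat additions to others’ text as positive, deletions and reversions as negative, and normalise to the range -1 (opposed) to +1 (aligned). The scoring rule below is a simplified assumption, not the exact measure used in the study.

```python
def alignment_score(interactions):
    """interactions: list of (chars_added, chars_removed) against others' text.
    Returns a value in [-1, 1]: +1 pure addition, -1 pure removal."""
    added = sum(a for a, _ in interactions)
    removed = sum(r for _, r in interactions)
    total = added + removed
    if total == 0:
        return 0.0          # left well alone
    return (added - removed) / total

print(alignment_score([(120, 0), (30, 5)]))   # ~0.94: essentially aligned
print(alignment_score([(0, 300)]))            # -1.0: deleting, opposed
print(alignment_score([(50, 50)]))            # 0.0: contributions cancel out
```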

Forget the fish, it’s all about the river

Trust is an approximation that hopes to save fruitless work.

Is there still a role for expensive machine learning? If we can’t depend on precise signals to condense into a clear message from online text, what can we use learning for in our Internet age? One answer is to forget about the passing signals and just measure their flow. This is basically what we do in physics with energy. By looking at the right level of detail, we can still achieve a limited and idealised predictability. A common currency allows all currencies to be exchanged and be useful.

The purpose of learning is pretty much summarised by the semantic spacetime hypothesis: it’s a way of telling normal from abnormal, and of discerning scales of measurement. The problem with basing trustworthiness estimates on statistics is that the more specific the data, the more sporadic and inconsistent the results, except in the most regular relationships, the ones on which we expend the greatest effort and time. Most agents receive patchy and inconsistent coverage. This is why heuristics are the key to trust.
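
To illustrate what measuring the flow rather than the individual signals might look like, here is a sketch that ignores what each event says and tracks only the event rate, flagging departures from a learned norm. The smoothing constant and the three-sigma threshold are arbitrary placeholders.

```python
class FlowMonitor:
    """Track only the rate of events (the 'flow'), not their content.
    An exponentially weighted mean and variance define what is 'normal'."""
    def __init__(self, alpha=0.1):
        self.alpha = alpha      # smoothing constant (arbitrary placeholder)
        self.mean = None
        self.var = 0.0

    def update(self, rate: float) -> bool:
        """Feed the event rate for one interval; return True if it looks abnormal."""
        if self.mean is None:
            self.mean = rate    # first observation just defines the baseline
            return False
        deviation = rate - self.mean
        abnormal = deviation ** 2 > 9 * self.var if self.var > 0 else False
        self.mean += self.alpha * deviation
        self.var = (1 - self.alpha) * (self.var + self.alpha * deviation ** 2)
        return abnormal

monitor = FlowMonitor()
for rate in [10, 12, 11, 9, 10, 50]:     # invented edits-per-hour readings
    print(rate, monitor.update(rate))    # only the burst of 50 is flagged
```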

Suppose we compare learning to immunology, as I did many years ago when seeking inspiration. Bioinformaticians might believe (or at least hope) that the protein spikes on cells can distinguish every one uniquely, just as we hope that our genetic makeups are unique and offer precise answers about our nature. Neither of these things is true. The immune system is not triggered only by the detection of “non-self”; rather, it starts from the heuristic signals released by necrotic cell death. These signals are far more abundant and easier to detect than the protein structure of a specific virus. They can be triggered by false positives, leading to allergies and autoimmune problems. Later, when things get more serious, it becomes more economically viable to flood the blood with more semantically precise countermeasures (B-cells), police (T-cells), and garbage collectors. But that takes time. It’s not a trivial operation by any means, but its dynamics are easier to trace than what goes on in a nervous system. The chain of reasoning begins with trust (optimistic acceptance) of heuristics, because filling the blood to capacity with enough policing forces to find a precise match would suffocate the host far quicker than any pathogen. A separate process in the lymph glands does that more slowly (over days rather than seconds).

In immunology, each potential recognition is noted but switched off until a secondary confirmation is received. The coactivation of signals is a kind of counterfactual strategy. The benefits of extensive machine learning of similarly “unactivated fragments” (analogous to protein signatures in immunology, or phonemes and patterns in linguistics) are marginal. They have a high cost, but they remain almost useless for detection because they take too long to confirm. If we can win the time by responding to heuristics as a first approximation, we might still be able to learn episodic patterns for the long haul. These take the form of highly compressed stories we can compare against. Assembling these stories is a high-cost, high-value operation. It’s how we get our amazing cognitive skills, but they are only useful if we have time to use them, which is to say if we can trust in our safety and advance to a peaceful coexistence with our surroundings.
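
A toy sketch of that two-signal idea: a candidate recognition is recorded but stays dormant, and it only escalates if an independent confirming signal arrives before it ages out. The window length and signal names are invented.

```python
class CoactivationGate:
    """A candidate detection stays dormant until a second, independent
    signal co-activates it within a window of observations."""
    def __init__(self, window=5):
        self.window = window                # how long a candidate stays live
        self.pending = {}                   # candidate -> observations remaining

    def observe(self, candidate=None, confirmation=None):
        """Returns the name of an activated candidate, or None."""
        # Age out stale candidates (trust wins if no confirmation arrives).
        self.pending = {c: n - 1 for c, n in self.pending.items() if n > 1}
        if candidate is not None:
            self.pending[candidate] = self.window
        if confirmation is not None and confirmation in self.pending:
            del self.pending[confirmation]
            return confirmation             # both signals present: act now
        return None

gate = CoactivationGate()
gate.observe(candidate="suspicious_edit")             # noted, but switched off
print(gate.observe(confirmation="suspicious_edit"))   # 'suspicious_edit': activated
```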

Mathematically, we can pose the question like this: imagine a feature space of evidence about the world. Assume that it is causally complete, in other words that it holds every bit of information that contributed to a certain outcome. We assume that every answer is a vector in this space, together with its history of evolution. Every answer involves a set of components in some complete basis. A coarse-grained answer is one in which the components are grouped together in a smaller set in order to average away some detail. A heuristic answer is one in which we throw away most of the components and say something like: if component x passes some ad hoc threshold, then we trust an inference about property X. This is trust because we explicitly decide not to look in more detail. Heuristics are a trust strategy.
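
In compact notation, that distinction might be sketched as follows (the symbols are illustrative, not a formal definition from the study):

```latex
\begin{aligned}
\text{evidence:} \quad & v = (v_1, \dots, v_n) \in \mathbb{R}^n
  \quad \text{(causally complete basis)} \\
\text{exact answer:} \quad & A(v) = f(v_1, v_2, \dots, v_n) \\
\text{coarse grained:} \quad & A_c(v) = f(\bar{v}_1, \dots, \bar{v}_m),
  \quad m \ll n, \ \text{each } \bar{v}_k \text{ averaging a group of components} \\
\text{heuristic (trust):} \quad & \text{infer } X \iff v_x > \theta,
  \ \text{all other components discarded}
\end{aligned}
```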

Worship the toasters?

Could your phone tell you whether to get into that taxi in a dark place you didn’t know? It’s not beyond the realm of reason, but is it desirable? We developed these instincts for self preservation. Do we really want to give that skill away?

Regulators and other Trusted Third Party services are sometimes used to monitor and offer guidance over well-known services, but they aren’t infallible. They can be spoofed and manipulated in a variety of ways, and they typically belong to national bodies that operate within local laws that may or may not meet our expectations in a global society. Trust is about self-protection, so adopting someone else’s judgement is just trusting by refusing to think for oneself: head in the sand.

AI has the potential to shift burdens by offloading cognitive work from secondary data channels onto machinery. It’s something like the “credit checks” that financial institutions run on us; in practice these can be either useful or prejudicial. With greater brute force per unit time, offloading this discomfort feels like a win–but it’s not cheap. It just replaces one cost we don’t like with another we feel we can bear, mainly because it’s harder to see. Does it get us anywhere at all? Doesn’t it just advance the inevitable arms race of human skullduggery for bad intent?

Perhaps the answer for society lies not in rigorously detecting trustworthiness, like some police state, but in investing in the prevention of skullduggery itself–by mitigating human desperation and focusing on a sense of purpose for all. The research clearly shows what brings people together and what drives us apart. The scaling of agency through governance is still our greatest invention. Don’t expect AI to make it redundant.
