Trust and Trustability

How Promise Theory tells us trust is really an economy of attention

Mark Burgess
May 11, 2023

“Trust is one of the most commonplace ideas in our lives, at the root of almost everything we do, and yet it remains an elusive idea that we hand-wave about and even dismiss as part of the moral mystique of the human condition.”

So begins a summary of the Promise Theory of trust that I've worked on as part of a project to understand the role of trust in our modern cybernetic society. Trust is held up as one of the major challenges of the Internet age, where powerful new tools for deception and broadcasting have been handed to Everyman as part of a dream of freedom (freedom to do, but not freedom from harm).

In this short summary I want to explain how trust is really about how much effort we choose to put into watching over a process that is meant to keep some kind of promise. It's a measure of our anxiety, perhaps, but one based on an assessment made with (often) incomplete information.

The literature on trust is enormous, but almost none of it attempts to determine what role trust plays in life. That's something researchers have taken entirely for granted. They focus instead on what leads people to trust or mistrust, as if knowing that were useful in itself. But if we don't know what trust "does", why should we care?

The semantics of trust, confidence, and risk

Promise Theory

Ironically, the new Promise Theory of trust tells us that taking something (like trust) for granted is itself a naturally trusting reaction. We would rather trust trust than expend effort to understand it. According to Promise Theory, trust is a counting currency, something a bit like energy, that we use to put a number on work. In this case, it relates to the effort to obtain information: not just any information, but information about whether everything is as it should be, i.e. whether a promise is on target to be kept. Unlike physics, Promise Theory is not only about dynamics; it also takes up the more detailed semantics of information (see my book In Search of Certainty for more about semantics).

Trust and Promise Theory have had a long association, going back to 2007, when I was seconded to a retreat in Germany with my friend and collaborator Jan Bergstra to work on Promise Theory as part of an EU research project. Back then, we didn't have much of an idea of what trust was either. Like most researchers, we assumed we knew what trust was and were more interested in exploring its underpinnings together with promises. But we knew the easy part: there's a relationship between the extent to which something keeps its promises and our level of trust in it. I was asked to return to this question recently, after a long period of being pulled in other directions. In the interim, it turns out, I'd built up an intuition for how to approach trust from a deeper level.

A few authors have discovered that it helps to distinguish two elements of trust: a part we call trustworthiness, which is an assessment of reliability, and another part which is an investment in a relationship, where we simply decide how much we trust. The two aren't quite the same thing. Clearly trust is related to the reliability with which promises are kept. The idea becomes more interesting when we compare trust to the role of energy in physics. Promise Theory shows that there are parallels between the way we can describe intent and its dynamics and the way we describe change in physics. The bottom line is that the two kinds of trust behave a lot like potential and kinetic energy. We can call trustworthiness potential trust, because it indicates the attractiveness of an agent's behaviour to other agents. Similarly, we can associate kinetic trust with the amount of effort an agent is willing to expend on watching over the keeping of a promise once it has established a relationship with another agent. See figure 1.
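
To make the distinction concrete, here is a minimal sketch in Python. The names and formulas are my own illustrative assumptions, not definitions from the Promise Theory papers: potential trust is estimated from an agent's promise-keeping history, and kinetic trust is the checking effort we choose to forgo once a relationship exists.

```python
# A minimal sketch of the two trust quantities, for illustration only.

from dataclasses import dataclass

@dataclass
class PromiseHistory:
    kept: int = 0
    broken: int = 0

    def record(self, was_kept: bool) -> None:
        if was_kept:
            self.kept += 1
        else:
            self.broken += 1

    @property
    def trustworthiness(self) -> float:
        """Potential trust: the observed reliability of promise keeping."""
        total = self.kept + self.broken
        return self.kept / total if total else 0.5  # no data: agnostic prior

def kinetic_trust(trustworthiness: float, max_rate: float) -> float:
    """Kinetic trust modelled as monitoring effort we choose NOT to spend:
    the more trustworthy an agent looks, the lower our sampling rate.
    The linear relation is an assumption for the example."""
    sampling_rate = (1.0 - trustworthiness) * max_rate
    return max_rate - sampling_rate

history = PromiseHistory()
for outcome in (True, True, True, False, True):
    history.record(outcome)

print(f"potential trust (trustworthiness): {history.trustworthiness:.2f}")
print(f"kinetic trust (checks/hour saved): {kinetic_trust(history.trustworthiness, 10.0):.1f}")
```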

Information Theory

Observing (monitoring and checking) is a problem in information theory. Any time we watch over something, information is sent from a source to a receiver. Observing only once is usually not useful, because there could be an error; uncertainty drives us to keep looking. The sender sends as often as it likes, and the receiver samples the channel as often as it likes. If it samples rarely, it can miss information.

The Nyquist frequency is the rate at which the receiver must sample to catch all the information being sent. What does this have to do with trust? Clearly, if the receiver decides that information about the state of promise-keeping is unimportant to it, it can live without it. Sampling and checking all the time costs work, and the receiver may have limited resources. If it believes the outcome will be OK, then it doesn't need to keep checking, and we say it trusts the process (and, by inference, its source).

Whether or not checking is important is a matter of policy. It's an individual judgement, based on circumstance. It's not something that can be derived deterministically from a formula that works for everyone (unless, of course, the receiver is a machine whose decision making has been preprogrammed to evaluate a simple function). Nor does it have anything to do with the identity of the source: we don't trust someone because of who they are; we trust their intent to deliver. Trustworthiness is an assessment that could be automated by monitoring the promise-keeping history of an agent, and then associated with its identity, but only as far as the sampling rate allows the monitor to see! Simple agents use simple algorithms, but more complex agents will have to balance multiple processes and issues in making assessments.

The basic sampling issue is easy enough to understand. Think of a security camera that sweeps back and forth across a yard at a certain rate. If a thief is quicker than the camera, he or she can run across the yard while the camera isn’t looking without being detected. The same is true of any observation process. This is the meaning of the Nyquist sampling law.
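
A toy simulation makes the camera example concrete. The numbers and the look-at-regular-instants model are invented for illustration: a crossing that takes less time than the camera's sweep period can slip between two looks.

```python
# Toy illustration of Nyquist-style sampling: an intruder who crosses
# faster than the camera's sweep period is mostly never observed.
# All numbers are invented for the example.

import math

def crossings_detected(sweep_period: float,
                       crossing_starts: list[float],
                       crossing_duration: float) -> int:
    """Count crossings that overlap at least one camera look.
    The camera looks at the yard at t = 0, T, 2T, ... (T = sweep_period)."""
    detected = 0
    for start in crossing_starts:
        first_look = math.ceil(start / sweep_period) * sweep_period
        if first_look < start + crossing_duration:
            detected += 1
    return detected

starts = [1.3, 4.7, 9.2, 15.8]  # when each crossing begins (seconds)
print(crossings_detected(2.0, starts, crossing_duration=3.0))  # slow thief: all 4 seen
print(crossings_detected(2.0, starts, crossing_duration=0.5))  # fast thief: only 1 seen
```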

Figure 1. Trustworthiness and antitrust form a total energy concept for effort with semantics of promise keeping

In this picture, trust turns out to be about giving agents the benefit of the doubt, by not checking up on whether they keep their promises. You can interpret this as a moral kindness if you like. More likely, it's just economics: we want to save valuable effort. You see, the opposite of kinetic trust (antitrust) is attentiveness, known in technology as monitoring and in finance as auditing. Attentiveness from receiver to source signals mistrust. Attentiveness in keeping promises, from source to receiver, builds trustworthiness and tends to increase kinetic trust. Trusting may thus introduce a kind of perceived risk, but it's a policy decision, so the agent has effectively accepted that risk on purpose.

The law of lazy attentiveness

Trust is a policy of deliberate inattentiveness. Mistrust is proportional to our sampling rate for checking up on an agent. Trust is therefore a strategy for saving effort, given finite resources. Attention has to be shared between different processes and promise relationships, so the exchange value of trust plays a role in shared cost saving. Conversely, mistrust turns into a policy of micromanagement or "busy waiting", endlessly and impatiently interrogating a promise channel: did you do it yet? This is the kinetic aspect of trust: it accounts for sampling velocity in the direction of a particular promise.

The Nyquist law tells us that too much effort may simply be wasted. Sampling saturates a channel at a threshold frequency, so a mistrusting agent would do well to learn to trust enough to sample at the Nyquist frequency or below, to optimize its expenditure of effort. Below that frequency, its estimate of promise reliability might be less accurate, but that won't affect the rate at which a promise is kept, because that's entirely the domain of the sender. Trust is a question of monitoring! This obviously has important implications for our cyber-physical world.
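
As a sketch of those economics (my own toy model, which simply assumes mistrust scales the desired checking rate linearly), any effort above the Nyquist rate buys no new information, so the rational policy caps itself there.

```python
# Toy model of the law of lazy attentiveness: checking effort as a
# function of mistrust, capped at the Nyquist rate. The linear scaling
# is an assumption made for illustration.

def sampling_rate(mistrust: float, nyquist_rate: float, max_rate: float) -> float:
    """Mistrust in [0, 1] scales how often we interrogate the channel;
    rates above nyquist_rate yield nothing extra, so cap there."""
    desired = mistrust * max_rate       # impatient "busy waiting"
    return min(desired, nyquist_rate)   # the rational cap

def wasted_effort(mistrust: float, nyquist_rate: float, max_rate: float) -> float:
    """Checks per unit time that yield no new information."""
    desired = mistrust * max_rate
    return max(0.0, desired - nyquist_rate)

for m in (0.1, 0.5, 0.9):
    print(m, sampling_rate(m, nyquist_rate=4.0, max_rate=20.0),
          wasted_effort(m, nyquist_rate=4.0, max_rate=20.0))
```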

This explanation of trust as deliberate inattentiveness leads to a simple hypothesis, which fits easily with common sense: agents will tend to trust a coarse or less detailed promise more than an extremely detailed one, because the latter costs more to assess. We tend to trust people who speak in simple terms, even if they don't really give good answers, and we tend to distrust people who speak in a complicated and detailed manner. Politicians make vague promises successfully, but rarely win on complicated and accurate claims. It might be ironic, but it seems to fit the facts.

Confidence and risk

Digging into the literature, one quickly sees how trust gets muddled together with some related concepts: confidence and risk. These are related to trust, but they have different semantics.

If trust is about asking "are we there yet?" of a promise, then confidence is about asking "could we ever get there?" Trust is an assessment of the intent to try; confidence is an assessment of the likely capability to succeed. It's a subtle difference, but one with important consequences that I won't go into here. When an agent has low confidence in a promise being kept, it may become "hopeful". I trust (the intent) that the bus will come, but I'm not confident in its ability to get me to the church on time (that might depend on external factors)!
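
One way to keep the two assessments apart, in a purely illustrative framing of my own, is to store them as separate estimates and see how the bus example falls out:

```python
# Illustrative separation of trust (intent) and confidence (capability).
# The structure and thresholds are assumptions for the example.

from dataclasses import dataclass

@dataclass
class PromiseAssessment:
    trust: float       # assessed intent to try, in [0, 1]
    confidence: float  # assessed capability to succeed, in [0, 1]

    def describe(self) -> str:
        if self.trust > 0.7 and self.confidence < 0.3:
            return "hopeful: I believe they'll try, but doubt they'll manage"
        if self.trust < 0.3:
            return "mistrustful: I doubt they even intend to try"
        return "trusting and confident"

# The bus: I trust the intent, not the ability to beat the traffic.
bus = PromiseAssessment(trust=0.9, confidence=0.2)
print(bus.describe())
```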

Some authors have claimed that trust tends to increase with vulnerability. Promise Theory says this is wrong. Instead, there are related concepts that come into play when an agent risks a loss by depending on a single promise. When we are exposed to an outcome and risk a loss, we are vulnerable and may have no recourse other than hope. Hopefulness is much the same as mistrust in this promise picture, because, in the absence of some redundant alternative, the receiver can do nothing to improve its odds of an outcome except watch even more often. It's a bit like repeatedly turning the ignition when your car doesn't start, or kicking the TV to get a channel to work. An agent can also impose on the sender, which is what we call blame. These are useless strategies, but we try them in case the sender is merely sleeping, because we haven't arranged failover alternatives in the larger network of agents to hedge the risk of no outcome.

Promise Theory speaks about promises as steady-state relationships between agents. It also talks about "impositions", which are transient events. It has long been observed (by me, anyway) that impositions tend to reduce trust. This applies to uninvited and irregular behaviours, like scheduling Zoom meetings without asking, as well as to manifestly negative things like blame, accusations, and other attacks on the autonomy of the other. Impositions are attempts to induce cooperative behaviour and cause the other party to do work. Naturally, the receiver will tend to focus more attention on the imposer.

To cut a long story short, trust is useful because it's more easily exchanged and traded, like a common currency, than confidence or risk, which deal with specific outcomes. Risk is often tied very closely to specific failure modes, and it deals with events that are far from predictable, so it can't really be assessed quantitatively in the important cases. Just as trustworthiness can be passed on as an assessment of reputation, reputation can be aggregated back into an assessment of trustworthiness, in a feedback loop, so trust policy can save further effort by averaging away the detail of assessment (coarse graining). Trust, then, can use a rough estimate based on one promise to bootstrap another. That's why trust is such a useful tool for minimal scrutiny.
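
Here is a sketch of that feedback loop, under invented weightings: per-promise assessments are coarse-grained into a single reputation score, which then bootstraps the trust placed in a promise we've never sampled.

```python
# Sketch of coarse-graining trustworthiness into reputation and using
# it to bootstrap trust in a new, unsampled promise. The averaging and
# the weights are invented for the illustration.

def reputation(per_promise_trustworthiness: dict[str, float]) -> float:
    """Coarse grain: average away per-promise detail into one number."""
    scores = per_promise_trustworthiness.values()
    return sum(scores) / len(scores)

def bootstrap_trust(rep: float, prior: float = 0.5, weight: float = 0.8) -> float:
    """Initial trust in a NEW promise from the same agent: mostly
    reputation, tempered by an agnostic prior."""
    return weight * rep + (1 - weight) * prior

observed = {"delivers_on_time": 0.9, "answers_email": 0.6}
rep = reputation(observed)
print(f"reputation: {rep:.2f}")
print(f"trust in a brand-new promise: {bootstrap_trust(rep):.2f}")
```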

Cybersecurity detail

Like any attempt at scientific understanding, a lot of nitpicking over details is needed to get it right: antitrust! We could save that work and simply trust that it's right, but that would be the opposite of science. Science is an antitrust method, whose aim is to clean up its act so that it can be as potentially trustworthy as possible. A couple of years ago, I wrote a short book about risk and trust for developers in cybersecurity, with my friend Patrick Debois. Only a few readers understood the points: they wanted a simple answer, and already found our notion of risk complicated. Now we finally have a detailed answer to what trust means and how its economics work!

In IT, people really don't want to deal with trust. We are used to expending any amount of computing resources to check stuff, but that's not the effort that matters here. The problem in security is that we can't be bothered to expend our own effort to check on promise keeping. Often we can't even be bothered to formulate specific promises! We rely on generic monitoring and throw tasks at technologies like crypto to "just handle it". Many IT folks are looking in the wrong place: trust relates mainly to the work of programmers in checking their code, not so much to the CPU cycles expended. Mostly, we rely on trivial identity checks using crypto tokens. If we can verify a plausible identity, we "trust" a party from there on.
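
To caricature that posture in code (a deliberately simplified sketch, not any real security API; the function names and the token are placeholders): identity is checked once, and everything downstream is accepted unexamined, whereas the promise view would keep cheaply sampling the behaviour itself.

```python
# Caricature of two security postures; not a real security API.

def verify_token(token: str) -> bool:
    """Stand-in for a cryptographic identity check (placeholder)."""
    return token == "valid-token"

def verify_once_then_trust(token: str, actions: list[str]) -> list[str]:
    """Common IT posture: check identity once, then stop looking."""
    if not verify_token(token):
        return []
    return actions  # every subsequent action accepted unexamined

def sample_promise_keeping(actions: list[str], sample_every: int = 3) -> list[str]:
    """Promise posture: keep (cheaply) sampling the behaviour itself."""
    return [a for i, a in enumerate(actions) if i % sample_every == 0]

actions = ["read", "write", "delete", "read", "escalate", "read"]
print(verify_once_then_trust("valid-token", actions))  # all 6 pass unexamined
print(sample_promise_keeping(actions))                 # spot-checked subset
```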

Technologies like blockchain that claim to involve "zero trust" argue that, by making everyone exert maximal effort to duplicate each other's work, one achieves a kind of statistical certainty. The flaw in that argument is that trust is merely shifted from a single agent onto reliance on a common source of software. The blockchain greatly oversamples the channel, but introduces new uncertainties that aren't checked. Of course, we can do better.

Postscript

We crave certainty, but it is in short supply. We would like to have confidence in our abilities and actions, to be able to predict success with certainty, but that is very difficult, if possible at all. Instead, we make do with a rough estimator called trust.

In a world where agents live in relative safety, i.e. one in which the impact of undesirable and non-cooperative behaviours is small, trust is a great cost saver. But in a non-linear world of large and sudden surprises, such as our technologically enabled, highly connected, and heavily weaponized one, trust may not be worth it. This is what we need to figure out.

If you want to know more about these details, dipping into your trust reserves and expending some work on them, you'll find the detailed arguments in a project sponsored by NLnet.
