Trust and Trustability, part 2

How future society can evaporate in a puff of autonomy

Mark Burgess
18 min read · May 23, 2023

In an earlier post, I discussed how Promise Theory offers an alternative picture of trust–not as the usual moral imperative for human behaviour, but as a coarse potential landscape that shapes decisions and relationships, more like something you would find at work in the physical sciences. This second part of the summary and discussion attempts to think about the implications of this understanding of trust for shaping our modern worlds, which now include both real and virtual places in games, online forums, and augmented reality.

In Promise Theory, trust emerges as a two-part conception that concerns the way agents assess and allocate resources to decide how much they can rely on one another. Trust comes in essentially both potential and kinetic forms (passive and active, if you prefer). We call them (potential) trustworthiness and (kinetic or attention) trust. Together these form a coarse, non-local assessment of influence, which acts something like energy in physics or money in economics, only with somewhat more specialized semantics. The parts of trust apply to all processes and thus (by teleological inference) to the agents engaged in the processes too. Trust is also related to (but distinct from) the notions of confidence and risk, which won’t be discussed here.

The role of trust in a cyborg era

What would a pragmatic model of trust look like? It would have to be something we could use to evaluate, adapt, and redesign the systems of society where needed, by understanding their causal implications. The societies of the future are to be both real and virtual. They will exist in gaming, virtual reality, and our hybrid physical worlds, augmented by sensory technology. To reveal this model, we have to stop assuming we know what trust is, stop putting all our efforts into building it or eliminating a reliance on it, and instead think about what trust does.

Some tend to think that if people don’t keep promises and thus can’t be trusted, we can just force them to by legal or even physical action. However, Promise Theory suggests that this only further undermines trust and future relationships.

Suppose that what trust does could be viewed as a kind of guide rail, shaping the trajectories of agent interactions. The question to answer then would be whether trust and trustworthiness alone could act as a dominant causal factor in shaping behaviour, on the scale of agents and their groups.


Trust as abstinence

In the social sciences, researchers ask the question: do you or don’t you trust someone? Then they try to correlate the answers with the behaviours of people or groups, so we never learn what trust is or what it does except by implication. Computer security engineers, by contrast, have appropriated a rather sketchy idea of trust as a human flaw, a weakness in human-computer systems that makes them vulnerable to attack. They look for a quick test to establish credentials, which can serve as a proxy for trust, and then try to sweep the issue quickly under the software rug, with a concrete alibi for filtering the trustworthy from the untrustworthy.

As humans deal with one another and with machinery in the 21st century, trust takes on a new level of significance for mapping out possible misfortunes and safety concerns. Today these include issues not only with finding reliable services, but also with hacking, stealing, espionage, "AI"-related fakery, as well as reliance on technology and its environmental impact. Current ideas about trust seem much too simplistic in their approaches, and are sometimes more political than practical. We need a more nuanced and functional understanding of what guides agents–something we can incorporate into the technologies and systems we build as proxies for our own actions.

The simplest way to think about trust is to begin with its opposite: mistrust or antitrust. That is the active end of the topic. Mistrust acts as a trigger, which unleashes the busy work of verifying whether some service process is going according to plan (or at least to the agent’s desire). The first component of trust, trustworthiness, is a passive historical assessment of the reliability of an interagent process in keeping its promise (how safe it is to place a bet on an agent’s intention to deliver on its promises). This then informs the second component of trust: a choice or policy about how much kinetic work or antitrust we will invest in monitoring the outcomes as they progress. If we choose not to engage, then we don’t need kinetic trust at all.

Trust therefore has two components: (potential, kinetic) = (history, policy).

Antitrust is about allocating attention or time resources to the oversight of promise-keeping processes, and trust is the complementary view of foregoing such an allocation of oversight. Antitrust is thus a bit like work and energy: mathematically, both of these are complementary variables to time, measured in units of activity. It might be wrong to call trust an enabler of behaviour, but it can perhaps be considered a precondition to establishing a causal reliance on a promise.
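As a minimal sketch (my own illustration, not code from Promise Theory), the two components might be modelled like this: trustworthiness as a running history of promise-keeping, and trust as a policy that decides how much of a finite attention budget to spend on oversight. The class and numbers here are hypothetical.

```python
# Illustrative sketch: trustworthiness as history, trust as a policy about
# how much monitoring attention (antitrust) to allocate.

class AgentAssessment:
    def __init__(self):
        self.kept = 0      # promises observed to be kept
        self.broken = 0    # promises observed to be broken

    def record(self, kept: bool) -> None:
        """Update the historical record after observing an outcome."""
        if kept:
            self.kept += 1
        else:
            self.broken += 1

    @property
    def trustworthiness(self) -> float:
        """Potential component: fraction of past promises kept (0..1)."""
        total = self.kept + self.broken
        return self.kept / total if total else 0.5   # no history: agnostic prior

    def monitoring_effort(self, attention_budget: float) -> float:
        """Kinetic component (antitrust): attention we choose to spend on
        oversight. The more trustworthy the source, the less we spend."""
        return attention_budget * (1.0 - self.trustworthiness)


# A fairly reliable supplier earns back most of our attention budget.
supplier = AgentAssessment()
for outcome in [True, True, False, True, True]:
    supplier.record(outcome)

print(supplier.trustworthiness)          # 0.8
print(supplier.monitoring_effort(10.0))  # 2.0 units of attention still spent
```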

Assessment of other and of self

Agents assess one another to manage resources by planning how much attention should be invested in watching over the supply channels of their dependencies. Assessing the quality and quantity of past promise-keeping gives a notion of trustworthiness, i.e. reliability. On the basis of this assessment, an agent chooses whether to gamble on accepting a new promise from the same source to deliver something else. This negotiation is a handshaking dialogue in which trust plays a role.

Sadly for software engineers, there is no plugin or algorithm that can “deliver” trust as a feature, in the sense it is usually discussed. No one can sell you trust. Trust can’t be installed or handed to us, even by agents' actions and credentials. It’s purely a policy decision made by every autonomous agent individually.

We might associate (even correlate) trustworthiness with rank or position or authority or familiarity, but ultimately nothing can force us to direct our attention resources to allay doubts about relying on another. All we can do is be trustworthy in keeping our own promises, by exhibiting stable reliability in our own behaviours.

The First Law of Promise Theory (locality): no agent can make a promise on behalf of any agent other than itself.

Earning an assessment of trustworthiness by another agent has its own cost: the agent concerned has to put in the work to deserve it. That requires a level of effort or diligence, which subtracts from other expenditures of work. So, trust and trustworthiness ought to go hand in hand — what we save on one task, we can spend on the other.

Think of an IoT example: do you trust your home electronic assistant? When you say: “Hey <favourite-branded-company-name>, turn off the bedroom lights!”, you begin by asking: is the agent behind it trustworthy in keeping this promise? If not, you will likely go to the bedroom each time to check whether it worked. If so, you’ll learn to assume the outcome and grow fatter on the sofa.

The reckoning is this: as we use more resources to ensure our own outcomes are reliable, we have less of them to spend on micromanaging and checking others–so we would have to become more trusting of them. There is an apparently virtuous reinforcement cycle in this reasoning, which sounds appealing, but it has nothing to do with being morally good. It’s just a little resonance implied by the finiteness of agent resources. That could be an emergent outcome. It could just as well go the other way. As agents become larger, they can trade resources to avoid this virtuous cycle. They can enter into trust borrowing and debt.

Trust seems to be a preferable option for encouraging certainty and intentional behaviour, but all-consuming attention is also a highly addictive trait, so we can’t take for granted that offering the benefit of the doubt will prevail on its moral values alone. One only has to look at the continued popularity of overpriced monitoring solutions (mostly useless), and at stalkers and peeping toms, to grasp this. Humans have a strong desire to know.

Coarse graining currency exchange

In a human-technology world, we’re interested in this question: should we rely on some unknown agent or not? Now it gets more interesting, because human assessment is not at all like machine assessment, so we have to think of the two differently.

Humans think expansively, and can convert the currencies of trust between different promises freely: borrowing and debt may feel natural to us. This fits with the coarse graining hypothesis, which suggests that we favour vague generalities over detailed reasoning. We are also fickle, apt to change our minds quickly. Imagine that someone expressed a sudden and unexpected religious or political view, or wore a T-shirt for the wrong sports team: this might suddenly produce a large change in our assessment of their trustworthiness in some scope.

If a source is associated with a company we don't like, or expresses a mistrust of us, that can influence our assessment of them in return, even though there is no requirement to do so. This mixing of information from different causal threads is both a strength and a weakness of human cognition that machines don't naturally do.

Machines are generally much more simplistic by design. Logic is, after all, only a simplistic razor for a kind of artificially sharp assessment.

Religious "faith" goes so far as to propose the antithesis of oversight: absolute trust, no questions asked! Have a happy life by not doubting at all, or asking questions! There’s an apparent imperative to save that effort by trusting as much as we can from the least information. Trust makes life easier, but it also opens an agent to exploitation–which religion and its later cousin politics have excelled at.

The very cost saving nature of trust is a weakness when it comes to rational scrutiny. Its attachment to coarse judgements means that humans are more attracted to everything that is less deserving of belief, in verifiable scientific terms. This is surely a challenge to reckon with, and it's not easily modelled by machine systems.

Data all the way down

In our modern age of data cravings (extreme antitrust), we have all kinds of technological tools to help accelerate data gathering and processing. The price of antitrust appears, at first blush, to have become much cheaper, but the density of decisions has grown too. The market is full of antitrust providers. It’s all an evolving arms race.

In order to employ an advisor, or a tool, or a service to help us scale our assessments of others (e.g. a human resources service, or broker), we have to trust that tool not to lie to us, or we have to expend the effort to watch over it too. If we trust the watcher, then why don’t we trust the original? Because humans can allow the criteria for trust to be different.

As we seek trustworthiness, an assessment of trust in one thing can be exchanged for trust in another. It’s a marvellous sleight of hand, and sometimes very practical! We can try to get out of a blind alley by shifting our reliance from one source of truth to another, even though each indirection formally adds to the mathematical uncertainty about the answer, as well as the semantics of what we are actually trusting.

For instance:

  • If you don’t trust the banks, you could move your trust to blockchain and cryptocurrency. Which is the more trustworthy?
  • If you don’t trust visitor arrivals, you shift your trust to a credential system of passports issued to people you do trust, but these can be forged.
  • Physical scientists and engineers will tend to mistrust any claim that scientific assessments can be made if they are not quantitative values, but if machine learning has taught us anything it’s surely that the criteria for assessing and matching inputs are not all simply quantifiable with the kinds of numerical estimates we’ve been able to use in more elementary cases. Symbolic and extended pattern recognition blurs the boundary between quantitative and qualitative.

Every filter is imperfect. Which do you trust the most? It depends on your perception of the risks.
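The cost of indirection can be made concrete with a little arithmetic. In a toy model (my own, with made-up numbers), where each link in a chain of proxies keeps its promise independently with some probability, the end-to-end reliability is the product over the links, so every extra intermediary can only lower it:

```python
# Toy sketch: shifting trust through a chain of intermediaries.
# Assuming (simplistically) that each link is reliable independently of the
# others, the end-to-end reliability is the product of the links.

from functools import reduce

def chained_reliability(link_reliabilities):
    """Probability that every link in the chain keeps its promise."""
    return reduce(lambda acc, r: acc * r, link_reliabilities, 1.0)

direct = chained_reliability([0.95])                      # trust the source directly
via_broker = chained_reliability([0.95, 0.98])            # add one trusted third party
via_many = chained_reliability([0.95, 0.98, 0.97, 0.99])  # a longer chain of proxies

print(direct, via_broker, via_many)   # 0.95, ~0.931, ~0.894
```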

Confusing trustworthiness with human values

It’s common to confuse dislike and disagreement between agents and promises with our notion of their (un)trustworthiness. If someone withdraws a promise we like, this does not count as them not keeping the promise, and so it shouldn’t contribute to a direct assessment of their trustworthiness, at least according to the Promise Theory picture. The fact that we would like someone to promise X doesn’t imply that they would or should. Trustworthiness is about the continuity of behaviour, not its type. If they are constant in their intent, the process has a kind of momentum and we assess that they are trustworthy to keep doing what they do.

Any sudden change in a promise would be a transient break of continuity (an imposition in Promise Theory terms) if it was unplanned, and this alone is enough to lead to a small negative perturbation of the trustworthiness score, according to the theory. Ultimately, the agent’s imprint of untrustworthiness will recover from such a perturbation, even when the relevant promise is long since moribund.

Constancy, then, is what leads to reliability and trustworthiness. If something is constantly bad, we can trust it to be so in the future too.
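A toy calculation (again my own illustration, not a formula from the theory) shows the shape of this: an unplanned change knocks the trustworthiness assessment down by a small perturbation, after which constancy lets it relax back towards its long-run level.

```python
# Toy model of a trustworthiness perturbation and its recovery.
# The exponential-style relaxation is an assumption for illustration only.

def relax(score: float, baseline: float, rate: float = 0.1) -> float:
    """One step of relaxation back towards the long-run baseline."""
    return score + rate * (baseline - score)

baseline = 0.9    # long-run reliability earned through constancy
score = baseline

score -= 0.2      # an imposition: a sudden, unplanned change of promise

history = []
for _ in range(20):                    # constancy afterwards restores the assessment
    score = relax(score, baseline)
    history.append(round(score, 3))

print(history[0], "...", history[-1])  # climbs from 0.72 back towards 0.9
```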

The temptation of the modern world is that the availability of powerful surveillance technology seduces us into mistrusting more, even when things are stable. We can now automate our surveillance and tighten the elastic bonds of suspicion because it’s so much cheaper to do. It's a drug.

Risk versus trust

“I don’t trust him, he’s my friend…” –Bertolt Brecht

Monitoring may only be a means to a selfish end, but would anyone step back from the brink of mistrust to forego monitoring once it had been established? Could we resist the temptation to see and to know everything, to peep like Tom?

For most engineers the answer is no, because conditions may be changing and they want to know that too. There's a bias. Knowing is addictive to humans, but machines don’t suffer from that problem. Winning positive trust tends to be slow and linear, step by step, but resetting it can be sudden and capricious.

Some examples of our uneven attitudes to trustworthiness:

  1. Today programmers consider most software libraries trustworthy. They promise to solve one problem or another, whether open source or commercial offerings that we are not allowed to inspect. The cost of inspecting seems to be too high for users to mistrust the open source aspect of software. Trust is given by association with the identity rather than the detail, as per the coarse graining hypothesis. Engineers may pay or not pay for something, but most won’t verify the code as long as some level of promises seems to be kept. The idea of using automatic code generation from Large Language Models is a more recent example of trusting without due diligence. It’s unclear where trustworthiness comes from. At this stage it’s perhaps only testing the water.
  2. In the 21st century, the cost of employing someone in a working role, versus the potential benefits, is now judged as a risk assessment rather than a trustworthiness issue. The same seems less true of leadership or management roles, where trust is still a normal criterion, e.g. membership of certain rich clubs may be taken on trust, in spite of the much higher cost of management roles. Thus trust is much higher in wealthy circles, and seems disconnected from trustworthiness (the track records of leaders are often poor, though also deliberately inscrutable). This is likely a coarse graining of trust by group identity.
  3. Social media influencers get their assessments of trustworthiness by gradually attracting followers. Initially, they have to rely on the generality of human association, i.e. on the alignment of their publicly stated promises with an ad hoc audience. A simple message that is regularly posted and costs little to consume will attract more followers than a detailed message that’s costly to consume. The regularity and coarseness of the assessment makes it a cheap option. Later, preferential attachment will accelerate growth by reputation (a dynamic sketched in code after this list). This means we can take these sources for granted, and we don’t have to invest an effort in checking their produce. Trusting them doesn’t mean we believe everything they tell us, it just tells us quickly how much we can likely believe.
  4. Antitrust is the product of the legal profession. It begins where promises between agents are not obviously in alignment. In contract negotiations, companies will try to limit their risk by imposing lengthy terms and conditions. Imposition leads to lowered trust, according to Promise Theory, and thus negotiations take place in an atmosphere of lowered trust to begin with. If they can sustain enough trust in the process to work through such details, in spite of these negative impositions, then they can build trust around the process itself–of working to address the edifice of risk they’ve created for one another. It’s not the terms that create trust, but the working together to express them.
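For item 3, the follower dynamic can be sketched with a standard preferential-attachment simulation (a generic illustration with invented numbers, not data from the post): each new follower attaches to a source with probability proportional to its existing following, so a few sources end up dominating.

```python
# Generic preferential-attachment sketch of follower growth (illustrative only).

import random

def simulate_followers(steps: int, sources: int = 5, seed: int = 1) -> list:
    """Each step, one new follower picks a source with probability proportional
    to its current follower count (+1 so unknown sources can still be picked)."""
    random.seed(seed)
    counts = [0] * sources
    for _ in range(steps):
        weights = [c + 1 for c in counts]
        chosen = random.choices(range(sources), weights=weights)[0]
        counts[chosen] += 1
    return counts

print(simulate_followers(1000))   # a handful of sources attract most of the followers
```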

Can’t impose no satisfaction?

Today, command and control managers may try to ask: what data do I need to get in order to brute force complete certainty? The falling cost of tools for brute force tempts this kind of grandiosity. Alas, it's not a sustainable question to ask.

To avoid provoking an arms race of hide and seek, we need to be asking how we can safely forego the need to rely on huge amounts of data in the first place. This applies at all scales: from single services to society as a whole.

The more force we try to apply to impose success, the greater the likely impact of mistakes. Wishful thinking and hope also promote something like mistrust. They cause agents to try too hard to extract a result, a bit like kicking a machine to make it work when nothing is happening. That ends up costing them more than it should. When the risks and infractions are only small, trust is a cheap option. It creates space for other activities. If we can’t trust, we may be paralyzed by the cost of acting at all.

What if we could simplify the world to make it stable, predictable, and ultimately safe again? Could trust and polite society alone once again guide our behaviours? The dynamical preference for social change is thus stability but not stagnation: a slowly varying “adiabatic” equilibrium. To avoid mistakes, society must be sufficiently in balance at all times, but a little borrowing here and there can help us to overcome obstacles. Once it is allowed to get out of balance, new arms races will take off and resources will be wasted on conflict.

Trust in testing

We have always placed too much confidence in testing (exams and resumés to seek out and select the best, as well as identity and provenance checks for authenticity). Today, we’ve quickly arrived at a world filled with deep fakes, plagiarism, and easy manipulation of data. It throws open huge challenges for education, work life, and even human happiness.

Artificial Intelligence is in the news but it is not the only challenge of trust. It’s always tempting for engineers and scientists to look for new tests and technologies that will uncover deceptions and faults, but we can’t sustain this ramping up of policing and correcting for long. The arms race will reach a cusp. The outcome of ever-increased control would be a virtual police state, which ends with mass malcontent, perhaps even revolt.

Antitrust always leads to an arms race between those trying to cheat on their promises and those trying to catch them in the act. Both sides are responsible for the ratcheting up of wasted effort. We like coarse criteria like identity for trust, but when we can no longer trust identity or provenance to be genuine, we try to employ fancy identity credentials, crypto keys, fingerprints, etc., as identity supplements. When we’re afraid of fake bots, we look for uniquely human characteristics like captcha tests or other forms of profiling. These work until they don’t. If we don’t trust ideas themselves, we censor them. At each level, an increasing amount of energy is expended in handshaking before interactions can take place–all to capture (presumably) a minority in the act.

The traditional alternative to prevention is to hedge risk with insurance providers, escrow agents, credit card companies with their fraud detection capabilities and other go-betweens. This is a form of trustworthiness exchange and borrowing. We ask people to put their money on the table before playing (money is a proxy for trust), to be sure they won’t cheat us in the end. Such methods are protocols for using intermediaries or Trusted Third Parties. They work quite well, for a fee, and don’t incur too many restrictions on our lifestyle–but they tend to work only for a privileged class who can pay for them, and not everyone trusts them.

The cryptocurrency community tried to eliminate reliance on a single trusted agent, like a bank, by “democratizing” the system of payment by taking a vote from many, thus shifting trust in one agent to trust in a system involving many agents. They mistakenly called it “trustless” or “zero trust”, because they didn’t recognize the implicit trust in their own intentions. It was an act of misdirection, of audacious political legerdemain, in which the vulnerability possibly even increased. Certainly the cost increased. But it was a clever manipulation of trust: blockchain was easier to trust for users because it’s harder to verify all its moving parts, yet it builds on the claim of extreme verification by proxy, with associated cost.

Technology allows us to overcome some costs in policing a society artificially, but machines have been algorithmically simplistic until now. The processing and understanding of the results were still limited by human cognitive faculties. Humans can't be pattern-matched as easily as machinery, but modern machine learning techniques now allow us to train vast resources onto antitrust monitoring that rival human subtlety, though without the appreciation of "big picture" context. Indeed, once we judge humans by the same standards as bots and fakes, we undermine the actual basis for making a meaningful contribution to society, and the foundation for society evaporates.

In his book, The Collapse of Complex Societies, Joseph Tainter warned against this escalation. He claimed that the diminishing returns of antitrust micromanagement eventually traps a civilization in a no-win situation of climbing debt. We may finally be crushed by the cost of our lack of trust. Geoffrey West, by contrast, has countered that technology would historically always overcome these traps, from studying the scaling of past costs in societal restructurings–but his argument trusted the data on innovation to save us. If innovation ever loses touch with human needs, stops delivering, or even takes another trajectory for its own edification, we will have lost that gamble.

Dodging that asteroid

Can we place our trust in a safe and happy human-technological future? Of course, that’s a coarse human question, of the kind we like. Trust it to do what? Do we even know what that fictional future promises? Just like those famous monitoring system aficionados, we don’t really know what we want from the future machine, but we want to. So we watch and wait, mistrustfully, occasionally blaming and accusing. We could hope for better.

What will be the purpose of trust in the cyborg age? Trust is the currency of tolerance that helps us to avoid a police state of overzealous control.

If trust is about pitting uncertainties against our inherited cognitive limitations, then we will only be happy in a society in which we are not working too hard to watch our backs. But the world for which trust was an adaptation was a very different place from the one we face in the years to come.

We find ourselves backpedalling. Societies are re-tribalizing, re-nationalizing, seeking autonomy and "sovereignty" over coherence, because antitrust consumes too much of us now. Our social fabric has already begun to fragment, thanks in no small measure to the technological telepathy we can now use to stay close to small cliques, and push others away. Technicians love to modularize and discriminate by parts: we build microservices, microcurrencies, and microhabitats; we like economic protectionism, tariffs, and sheltering behind moats in castles.

Discrimination may form the basis of our cognition and our reasoning, but taken too far it's just disintegration. A forest becomes only a bunch of trees that get in each other's way.

We've entered a perilous new trend of exacerbating differences, arising from our addiction to mistrust: identity politics, accentuating diversity (mainly to rub others' faces in it), calling out the right to be seen, the righteousness of autonomy, and grappling with the presumed benefits of freedom and self-determination, while ignoring the underlying benefits that coherence brings. All this means we are heading for conflict and disappointment in the years to come.

The evolutionary purpose of mistrust was not to ramp up hostilities, but probably to restore stability by encouraging mutual learning in order to stabilize differences and breed familiarity–to learn to know and respect our adversary, and therefore to become its friend. Coherence is the foundation of every society as well as the underpinning of trust; divorce is only a route to prolonged acrimony.

AI — artificial indolence?

Talk of "AI" continues to dominate headlines and seduce many into believing that human roles will be replaced through technological emulation. It's a natural enough idea for supplementing the shortfall in our workforce, when our economic model relies entirely on escalation or growth. But, if we relinquish our watch over the world to a convenient "AI" imposter, whose programming we can’t criticize or adapt to change very quickly, then we'll lock ourselves into a society where "the law is run by a stubborn ass "— and be seduced to sit back and watch from the sidelines as unsustainability drives us over a cliff.

Many science fiction stories have been written about this scenario. Security engineers argue about arms races against a criminal element, but the real enemy is our own apathy. Trust in algorithms built on inscrutable machine learning is worse than the stubborn ass of human law, because it has no public court of appeal.

This issue is now affecting our engagement with everything from democratic elections to the schooling of coming generations. As we accept this, even welcome it, we edge closer to the precipice of a new Dark Age, becoming an ignorant society of Eloi living above computerized Morlocks.

Disaster doesn’t have to be inevitable, if we would only prioritize global coherence–not the tilted globalization of the colonial eras, but an equitable promise of shared interests that includes survival for all through sustainability.

Trust and mistrust in “AI” is a growing narrative today, certainly polarizing us. "AI" has given a name and a face (a coarse identity) to a much larger issue about blindly trusting things we haven’t understood. I think we have no particular cause to worry about its methods, which mainly teach us how to know our human cognitive faculties better. We should rather be afraid of our own willingness to surrender to a system of economic growth that asks us to give up the elements of humanity that society needs to sustain it–it might not be ours forever.

It’s our own lassitude we should fear the most.


Mark Burgess

@markburgess_osl on Twitter and Instagram. Science, research, technology advisor and author - see Http://markburgess.org and Https://chitek-i.org