What does trust have to do with security anyway?

Balancing priorities in the cyborg age

Mark Burgess
10 min read · Sep 20, 2023

In a series of articles about our modern Cyborg Age, I’ve argued that our traditional understanding of trust is flawed. We’ve misunderstood its role as a moral imperative, when in fact it seems only to be a rough and ready predictor of attention costs (attentiveness to reliable outcomes) and benefits for processes involving our interactions with others. Evolution has equipped us with a low-cost cognitive assessment. To use it properly, we need to rethink its use in the modern world. No doubt, some will prefer to hold on to the moral view of trust and disagree, but then one remains stuck with the usual dilemmas. There are solid reasons for wanting to understand trust better.

Trust and Security

If you work in IT or even in business, you might be forgiven for thinking that trust is the mortal enemy of security. This is simplistic and misleading. Most of us regard the old Cold War adage, “Trust, but verify…” as an ironic truism, but in fact what it really teaches us is something about the dual nature of trust. Trust has two parts: the assessment of trustworthiness and the decision to watch over and monitor our dealings with others, i.e. to manage our attention.

It requires only a small leap of the imagination to see that trust is an attention regulator, using mindspace or invested work as its currency. I believe the trust project (http://markburgess.org/trustproject.html) has now shown that this view is fully consistent with all previous interpretations.

Trust has two parts: an assessment of trustworthiness and a policy for giving one’s attention to verify outcomes. Trust and verify. These are independent decisions, not at all mutually exclusive.

As I argued in detail, trust plays a role something like that of energy in physics: as an accounting tool for governing and tracking processes. It has both potential and kinetic forms. Potential trust is a guiding summary of the historical past. Kinetic trust is an attention management mechanism for the present. At least that’s what the research strongly seems to suggest.
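
Purely as an illustrative sketch (not the trust project's formal model), the accounting metaphor can be caricatured in a few lines of code: potential trust is a learned summary of past outcomes, and the kinetic side decides how much verification effort to budget now. The class name, the decay constant, and the update rule are all invented for the example.

```python
class TrustAccount:
    """Toy accounting view of trust: potential trust summarises the past,
    kinetic trust regulates the attention spent in the present."""

    def __init__(self, decay: float = 0.9):
        self.potential = 0.0   # learned trustworthiness, a running summary of history
        self.decay = decay     # how quickly old evidence fades

    def record_outcome(self, promise_kept: bool) -> None:
        # Update potential trust from an observed outcome (exponential moving average).
        outcome = 1.0 if promise_kept else 0.0
        self.potential = self.decay * self.potential + (1 - self.decay) * outcome

    def attention_budget(self) -> float:
        # Kinetic side: the less we trust, the more attention (verification work) we plan to spend.
        return 1.0 - self.potential
```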

In the security profession, trust quickly became regarded as a bad word. In IT, the disdain for trust probably began in the 1990s, when the Secure Shell introduced its half-baked hat-tip to the difficult problem of verification. The protocol’s designers were aware that, when you connect to a machine you’ve never met before, the IP address and hostname read from the connection might be spoofed, i.e. faked. So SSH asks users: ok, we’ve opened a connection, do you trust that this server is not lying about its identity? And, by the way, here’s an incomprehensible hexadecimal fingerprint to help you make your decision! Naturally, most users think: “Yeah, whatever, how the hell should I know?” So, in practice, we all say yes without blinking. The same issue was solved in PGP by asking random strangers to sign a petition for the authenticity of someone’s key. Key signing parties were all the rage for a little while, principally and comically underlining the fact that IT people have no social lives, at least not without work-security balance.
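
To make the mechanics concrete, here is a minimal sketch of the trust-on-first-use bargain that SSH strikes. It is not the OpenSSH implementation, and the known-hosts store here is just a dictionary invented for the example; the fingerprint mimics OpenSSH’s SHA256 style, a base64-encoded SHA-256 digest of the raw key blob.

```python
import base64
import hashlib

def fingerprint(host_key_b64: str) -> str:
    """OpenSSH-style SHA256 fingerprint of a base64-encoded host key blob."""
    digest = hashlib.sha256(base64.b64decode(host_key_b64)).digest()
    return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")

def check_host(host: str, presented_key_b64: str, known_hosts: dict) -> str:
    """Trust on first use: accept an unknown host once, then verify it cheaply ever after."""
    fp = fingerprint(presented_key_b64)
    if host not in known_hosts:
        # First contact: the user is asked to trust, and the decision is remembered.
        known_hosts[host] = fp
        return f"Unknown host {host} ({fp}): trusting and recording."
    if known_hosts[host] == fp:
        return f"Host {host} matches its recorded fingerprint: proceeding."
    return f"WARNING: {host} presented a different key ({fp}): possible spoofing, refusing to connect."
```

The expensive question (is this really the machine it claims to be?) is answered at most once; after that, trust acts as a cached answer that saves attention on every later connection.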

The designers of these tools probably thought they were doing their due diligence and bringing potential risk to the attention of users. In fact they were brushing aside the subtlety with a simplistic tap of the keyboard. Notice the words: risk and attention! In our project, we’ve shown that risk is the opposite of trustworthiness or potential trust, while attention is the opposite of kinetic trust.

What this should tell us is not that people are stupid or lazy, but that trust exists precisely to simplify these types of interaction. We trust because it saves us time, or spares us from racking our brains trying to figure out what is true or false. Luckily, the secure shell doesn’t try to calculate trust. It leaves the decision to the user, allowing us to continue, taking whatever risk might be involved, but of course it somewhat trivialises the issue by talking about fingerprints, which was the beginning of an unfortunate implicit association between trust and cryptography.

Of course, in a tiny number of cases, our decision not to dwell on the potential risk posed by false identity will expose us to manipulation and exploitation by bad actors. Is there anything we can do about that? In a recent book with Patrick Dubois, written while working with the Orchestra security software group, we argued that the best one can do is to try to manage the risks after they have materialised.

Figure 1. Our book on risk management from a Promise Theory perspective

Trust is the opposite of security?

The idea that trust is the opposite of security grew into a naive narrative, hardened by an endless number of papers about security that try to explain trust as some kind of infallible crypto token, as if we didn’t already know that passports and identities can be faked!

The shift in thinking is to sacrifice an immediate judgement (kinetic trust) about identity in favour of a long-term learning of trustworthiness (and hence of potential risk), together with a plan for responding quickly to incidents. This shift is sometimes called a strategy for resilience. In the short term, resilience is our ability to survive and recover from loss with minimum impact. Over the long term, it’s an ability to learn and adapt our behaviours to support those short-term goals. What we learn from even the most basic study is that we cannot predict the future. We cannot predict potential loss. All we can do is plan to recover as quickly as possible, to prevent the harm from being compounded by inaction.
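
A back-of-envelope illustration of why speed of recovery dominates: if harm is allowed to compound while an incident goes unaddressed, the loss grows with containment time far faster than any plausible gain from trying to predict the incident. The figures below are invented purely for illustration.

```python
def compounded_loss(initial_harm: float, growth_per_hour: float, hours_to_contain: float) -> float:
    """Toy model: unaddressed harm compounds until the incident is contained."""
    return initial_harm * (1 + growth_per_hour) ** hours_to_contain

# The same hypothetical breach, contained in 2 hours versus 48 hours:
fast = compounded_loss(1000.0, 0.10, 2)    # roughly 1,210
slow = compounded_loss(1000.0, 0.10, 48)   # roughly 97,000
```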

Trust is always greater than zero

Today, we have the marketing concept of Zero Trust Computing, which one struggles to say with a straight face, because everyone knows that there is no version of reality in which any system can involve zero trust. Every promise we rely on assumes the truth of its assertion. If you don’t trust someone’s identity, you have to trust the thing that verifies it. If you believe that cryptography is infallible, then you are trusting cryptography, and so on. If you trust in zero trust, you’re still trusting. It’s trust all the way down.

Even if we all built every part of every piece of software ourselves, without assistance, we would still have to trust the language we used, or the operating system, and so on. It would be impossible to avoid accepting some information without verification. How do you know that this is my real face? How do you know this is my real fingerprint? Homer, are we there yet? Are we there yet?

If we understand the role of trust as a cost saver, this becomes completely clear. A low level of trust is expensive. There is basically no story in which there can be zero trust, or indeed in which a finite amount of verification (even by the most powerful cryptography) can protect us from exploitation or attack. The time you spend verifying someone’s concept of Zero Trust is, in effect, an attack on yourself: time and money spent on something other than what you’re supposed to be doing. There has to be some balance in the grand scheme of verification. That is precisely what trust is for.

By treating trust as a moral issue, many professionals will naively distinguish accidental harm from deliberate harm. But dealing only with intentional harm is simply negligent. Whether harm is intended or unintended is only of interest when seeking compensation or revenge. It’s usually decided by a court of law in civil society. It’s certainly not something that an online service can decide. The chain of causality might be complicated after all. A malicious person calls a careless person to persuade them to do something they believe to be good, which in fact plays into an evil plan. Literature is full of stories like this.

Dance like a butterfly…

We can never avoid a small risk of attack. As I argued in a recent book about risk management (see figure 1), the best we can do is to respond to attacks and recover swiftly. This is quite well known and widely practised in corporate circles.

The Downstream Principle in Promise Theory indicates that the ultimate responsibility for protection always lies at the receiving end of a promise: avoidance of risk is the main thing we have control of. We can’t predict others’ actions or impositions (whether they are intended or unintended), but we can be quick to repair and recover from their impact with our own intent. Speed is the best defence against harm, because one can’t predict transient surprises. Surprise is an arms race against time.

Learning from incidents is a good investment either way. Enlightened parties separate the process into two parts: diagnosis of the causal pathway leading to harm (without blame attribution) and, where appropriate, blame attribution in a court of law to determine guilt.

Trust and groups

So, how should we best watch over, monitor, and govern our assets and processes to minimise contention? This is where trust helps us to decide how much effort to expend.

What we know from studying human interaction is this: if you don’t want there to be disagreement, avoid putting people into groups. If you have to put people into groups, the size of the group will limit the kind of task they can accomplish without infighting. It’s now commonly accepted knowledge that small teams (two-pizza teams, and so on) are efficient for problem solving. The more intense the work, the smaller the group should be.

Resilience, on the other hand, encourages larger groups when redundancy and protection from harm are needed. Critically, this is an unstable arrangement with a high cost unless there is a specific seed, such as a shared mistrust of an outside threat, so the contact between members of a large group has to be quite low. If everyone in a company had to talk to everyone else every day, it would quickly fall apart.

I once worked with a company where a leader tried to talk to everyone each week: basically, hello, goodbye, tick a box. It was a stressful waste of everyone’s time: the analogue of what security professionals call “security theatre”, i.e. going through the motions of a pantomime to ward off evil spirits. A meaningful interaction builds a payment of trust, which then lasts for a time before needing a refill. Hello-goodbye doesn’t really cut it though. Humans need a seed to glue them together: a common cause, a mutual interest. Similarly, walking through a metal detector rarely finds anything of interest, but it focuses people’s minds on safety for a brief moment. It captures one’s attention, or provokes a small investment of our kinetic mistrust.

Anthropologist and psychologist Robin Dunbar has long pointed out that group cohesion in animals is bought at the cost of grooming one another. We’re past the time when we pick fleas out of one another’s fur, but metaphorically we still do this to maintain our social circles. When you need a peer review, or some other kind of grooming, you are building alignment of intent with your peers. Small group work is a good strategy, assuming that the people in the group are already more or less aligned. Otherwise there might be a long and rivalrous road to harmony. The larger the group, the more thinly you spread that goodwill.

METCALFE’S LAW: the potential value (positive or negative) of a network is of order N² in the number of agents, but this is only the limiting value. It isn’t necessarily realized unless the processes support it. The glue that keeps a group together usually works by limiting interactions to order N, in a hub or hierarchical configuration (figure 2).
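
A quick illustration of the difference, with purely illustrative numbers: a full mesh of N agents has N(N-1)/2 potential channels (order N²), while a hub or hierarchy maintains only N-1 (order N).

```python
def full_mesh_channels(n: int) -> int:
    """Every member talks to every other member: n(n-1)/2 channels, order N^2."""
    return n * (n - 1) // 2

def hub_channels(n: int) -> int:
    """Every member talks only to a hub (a leader or a shared cause): n-1 channels, order N."""
    return n - 1

for n in (5, 50, 500):
    print(n, full_mesh_channels(n), hub_channels(n))
# 5: 10 vs 4, 50: 1225 vs 49, 500: 124750 vs 499
```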

Figure 2. Groups are cohesive when they form around a common seed, whether a threat or a leader.

Crime and punishment

Among the serious challenges we face is the confrontation with criminal activity: mobs, gangs, extortionists, scammers, etc. A criminal is basically someone whose intentions are not aligned with our norms. A security professional sees this as a reason to invest heavily in attentiveness, but that distraction prevents us from doing what we’re meant to do. That’s a heavy ransom to pay.

Investing in alignment, to prevent a niche from opening up for such opportunists, is of course the main long-term strategy for fighting crime. We know that technology can assist us both in avoiding and detecting malicious intent, but the intent begins in a human mind.

  • Phishing is a human hack that requires a governance solution. Norms and standards promote alignment. Communications providers can deploy tools to identify deceptions and bad actors. But there’s the problem of misadventure, carelessness, negligence, and many more variations of intent.
  • Can we use profiling (e.g. machine learning) to anticipate intent? This was tried extensively after the 9/11 attacks, and it didn’t end up with a good reputation. The methods themselves (inspecting “packets”, monitoring faces and clothes, and so on) cost a great deal and may backfire by eroding perceived trustworthiness, so they may undermine their own usefulness in the end.

Today, our delicate attitudes towards the use of power to win advantage, whether for safety or for exploitation, are undergoing revision. Trends from #MeToo to hiring practices show growing contention as the Internet opens up the market to too many people at once. The cost of detecting actual harm is the counter-harm we trade along the way. This is the new social dilemma.

Our patience for preventative measures is limited, and our sense of trustworthiness can be eroded by excessive inspection and monitoring too, breaking up alignments between friends and turning them into enemies. Hawkish politicians weaponise such methods every day by accusing political adversaries and foreign nations. It might give them a short-term seed to rally a certain group for a while, but ultimately it’s counterproductive.

The hard security challenge is to formulate an appropriate policy for prioritisation of intent. No one will ever truly agree about what’s most important, because everyone has their own assessment. The nagging truth is that this tends to be ad hoc unless one can maintain alignment over the long term. Coming to this mutual agreement is the function of “grooming” in the Dunbar sense.

Does any of this help us to have a safer future? Perhaps not. Danger is a moving target. After all, our concept of trust adjusts as the situation adjusts. We can always learn patterns, but how useful will they be? How reusable once learned? Unless we live in a sufficiently stable environment we can’t tell an anomalous wood from an anomalous tree. I like to think that we could strike a note of optimism about the future and err on the side of idealism. What else is there to hope for?

Written by Mark Burgess

@markburgess_osl on Twitter and Instagram. Science, research, technology advisor and author - see http://markburgess.org and https://chitek-i.org
