Should we trust what we read?

Due diligence versus brand loyalty

Mark Burgess
8 min read · Jul 5, 2023

TL;DR: human attentiveness has dwindled! The pace of modern cybernetic life saturates our cognitive faculties and leaves us off guard against concealed intentions, even vulnerable to suggestion. Once upon a time, we could take our time over breakfast with a newspaper or magazine and ponder the long and the short of the news; or we might settle down with a good book, or listen to a debate, and enjoy its subtle delights. Today, readers complain if any message exceeds “elevator pitch” length or “Internet meme” status. This leaves a nagging feeling that we are unsure of whom or what to trust. Seeing how kids multitask everything on their smartphones should make us suspicious of how much attention they actually give to any one thing. We’re still learning how to navigate this new reality.

The trust project

In previous articles about trust, I looked at how patterns of regular activity in online services can be used to gauge the predictability of promise keeping: the timing of responses is one quantifiable measure of behaviour, when promises are implicit in a service interaction. But one aspect of assessing promise keeping was missing. That’s what I want to think more about here.
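As a reminder of how that timing signal can be quantified, here is a minimal sketch. The coefficient-of-variation heuristic and the predictability_score function are my own illustrative assumptions, not the project’s actual code:

```python
import statistics

def predictability_score(response_times):
    """Return a regularity score in (0, 1] for a series of response timestamps.

    A hypothetical heuristic: take the coefficient of variation of the
    intervals between responses, and map it so that perfectly regular
    behaviour scores 1 and erratic behaviour tends towards 0.
    """
    intervals = [b - a for a, b in zip(response_times, response_times[1:])]
    if len(intervals) < 2:
        return 1.0  # too little evidence to detect any irregularity
    mean = statistics.mean(intervals)
    if mean == 0:
        return 0.0  # degenerate case: all responses at the same instant
    cv = statistics.stdev(intervals) / mean  # dispersion vs typical interval
    return 1.0 / (1.0 + cv)

# A service that answers like clockwork keeps its implicit timing promise:
print(predictability_score([0, 10, 20, 30, 41]))  # ~0.95
# An erratic responder scores lower:
print(predictability_score([0, 2, 30, 33, 90]))   # ~0.46
```

The point is only that regularity of observed behaviour can be reduced to a number, without any understanding of content. That is precisely the limitation this article turns to next.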

Intent is signalled through promises, and measuring promise keeping is easy when it’s about simple countable actions, but some promises are more hidden from view. Trust is not only about completing tasks: it’s about what people say as much as what they do; after all, saying is also doing. Focusing too much on Internet services as transactions neglects a deeper issue: the chatting and blogging that form a big part of what transpires on the Internet are where trust is more powerfully exploited. We need to build trust into these forms of expression too. That means we need different tools to analyse text.

Automating the search for meaning

How should we rate a user’s intentions? One could ask ChatGPT or similar bots whether they consider a user trustworthy, but they are programmed, on ethical grounds, not to make subjective judgements. LLMs can tell us, at great expense, what the gist of a text is, even the dominant emotions within it, but that isn’t the same as knowing whether its originator is trustworthy, or whether its consumer should trust it and at what level. End users still need to make that judgement themselves. At best, we can try to assist them in that inspection process.

Professional copy readers know, for example, how to skim text: they look at the title and summary, flip through looking for keywords, read the end, and then, if they’re sufficiently impressed, go back and make several passes with increasing levels of scrutiny. Casual readers who try to wade through from start to finish tire much more quickly. Like anything, there’s a technique to seeing what’s there.

Abstracts and summaries of articles, written by authors and magazine editors, were supposed to help with this, but they don’t. Today they no longer seem trustworthy as reflections of content, because they are widely exploited as grabbers of attention, designed to titillate rather than to communicate. They’ve been appropriated as marketing real estate: sales opportunities rather than accurate information services. Increasingly, the content within doesn’t live up to the expectations set by a title or a summary. Exaggeration is one thing; “burying the lede” is another familiar ploy in journalism and politics. Writers are encouraged by economics to exploit our trust or laziness, telling us what to think about an issue rather than enabling us to form a fair and unbiased assessment of our own.

Figure 1. Trust and trustworthiness in daily life. Where do we invest our attention?

What does text promise? Textbooks are different from novels. A textbook is about something obvious: there are subjects and things. A novel might not really be about its subjects. Instead, it can be about feelings and conflicts. For example, in our thing-oriented philosophy, we might say that Moby Dick is about a whale and a man, but actually it’s about obsession and a sense of fate. A machine analysis trained by humans might say the former, while our method here reveals the latter.

TMI

One of the pernicious developments creeping up on us since the turn of the millennium is how laziness and oversubscription lead to a reversion to tribalism as a basis for trusting. Given a lack of time to study issues at length, we tend to believe brand over diligence. If someone we align with wants to tell us what to think, they will succeed more by familiarity than by logic or reason. This conclusion already makes sense from the coarse-graining hypothesis.

Our media immersion suppresses our assessment skills and turns us into simple push-button automata, responsive to party membership dictates and tribal edicts, suspicious and even dismissive of rival teams without rational consideration. The resumption of tribal behaviours is well established, encouraged by smart devices and social media channels that create echo chambers for alignment. Witness how news outlets, sports teams, and political parties manipulate labels and slogans to win acceptance. I wrote about this in Slogans.

It would be wrong to suggest that manipulation of our attention only applies to politics and journalism. It affects science and education too. Differences in jargon, in cultural usage of words and phrases, and simply in ability to write clearly affect our ability to find meaning in written words. How shall we determine whether a piece of writing keeps its alleged promises? What shall we think if it promises nothing?

In IT, our crypto-signatures and IP addresses are the only labels we have. This leads to a very simplistic idea of trust, a kind of tech xenophobia: I think you are probably going to hurt me, on the basis of your 32-bit identifier. We infer almost everything about encounters in IT from user credentials, with only speculative context about our own goals from which to assess the intentions of others. Meanwhile, more significantly, out of sight and out of mind, shadowy legal “Terms and Conditions” haunt the unspoken hinterlands of meaning, should anyone feel the need to go against the grain. We trust that their existence doesn’t matter in the long run.

Replacing scrutiny with lazy assumption is a symptom of oversubscription in cyberlife: one person’s trust is another’s negligence. Burying something in irrelevant detail is a strategy for pulling the wool over someone’s eyes. So we need to strike a balance.

All these issues suggest that we need some help to manage our attention, but if we don’t want to be told what to think by manipulative humans, should we really propose asking an AI to tell us what it thinks? We might prefer tools that help us to help ourselves to read and judge intent. In the latest part of my trust project with NLnet, I’ve explored an alternative proposal for text-based analysis.

Semantic Spacetime Model

The Promise Model of trust is not a sophisticated deep-learning AI system like the Large Language Models that have grabbed headlines lately. Instead, it develops an approach to assessing meaning, without actual comprehension, based on a hypothesis about the elementary semantics of spacetime processes. I wrote about this in detail in my book Smart Spacetime. Its ambition is far more modest than that of any AI language model, and it might actually help to explain why LLMs work.

The idea is that the roots of meaning are simpler than we imagine: something that evolved from elementary distinctions about our environment. A simple organism can only discriminate up from down, left from right, before from after, and so on. These are the basic notions from which all concepts must be built.

These simple signal tests are a long way from sophisticated semiotic analysis, but we can use them to hunt for the cognitively rich parts of a text, in the hope that these parts somehow carry the weight of meaning, and so allow us to sample selectively for ourselves. We still make the assessment, but we have a special software magnifying glass to show the details that stand out. It’s based on n-gram analysis, something like DNA fractionation in bioinformatics. Biology teaches us lessons all the time.

Concepts are like mountain ranges against a flatland of background information: obelisks of signal that stand out as anomalies. A landscape anomaly doesn’t depend on language, so an anomaly approach works equally well for selecting importance in English or in Chinese. It’s a crazy idea that we encode meaning in the rhythms of language rather than in the words, but it works surprisingly well from a human perspective. We are not cold calculators of Taylorian quantitative efficiency after all. Our impressions are dominated by how we feel about what we hear or read.
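To give a flavour of what such a magnifying glass might look like, here is a minimal sketch in Python. It is my own illustration, not the trust project’s implementation: the sentence splitting, the n-gram repetition score, and the keep_fraction parameter (loosely echoing the sampling level shown in Figure 2) are all simplifying assumptions:

```python
from collections import Counter
import re

def ngrams(words, n):
    """All contiguous word n-grams of a token list."""
    return [tuple(words[i:i + n]) for i in range(len(words) - n + 1)]

def sample_text(text, keep_fraction=0.4, n=2):
    """Keep only the sentences whose n-grams stand out against the background.

    Each sentence is scored by how often its n-grams recur across the
    whole document: repeated phrases form the 'mountain ranges' that
    rise above the flatland of one-off background prose. keep_fraction
    is the proportion of sentences retained, i.e. how much of the text
    we still insist on reading for ourselves.
    """
    sentences = re.split(r'(?<=[.!?])\s+', text.strip())
    tokenized = [re.findall(r'\w+', s.lower()) for s in sentences]

    # Background signal: n-gram frequencies over the whole document.
    background = Counter(g for words in tokenized for g in ngrams(words, n))

    def score(words):
        grams = ngrams(words, n)
        return sum(background[g] for g in grams) / len(grams) if grams else 0.0

    scores = [score(words) for words in tokenized]
    k = max(1, int(len(sentences) * keep_fraction))
    cutoff = sorted(scores, reverse=True)[k - 1]
    # Preserve document order; drop the low-signal flatland.
    return [s for s, sc in zip(sentences, scores) if sc >= cutoff]
```

Note that nothing here depends on English vocabulary: for an unsegmented language like Chinese, character n-grams would play the same role as word n-grams, since the anomaly principle cares only about repetition against a background, not about words.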

I wrote two papers exploring this idea some years ago, and now I’m applying it to the trust project, because trust is both more subtle and less intentional than its moral reputation suggests. Ultimately, trust is a blunt instrument for skipping details.

The stakes are medium rare

We need to rethink how we use trust as a proxy for acceptance in our attention-strapped society. As narrative becomes ever more important as a vehicle for attention seeking, we have entered a war of trust. Why should we trust what we see, hear, and read? We have to take responsibility for a new kind of learning, about new forms of agent and new levels of indirection. What I read today may guide me for the next twenty years, long after I’ve stopped being aware of its potential danger.

We should think again about trust, learning once more the value of honouring curiosity and interest, and of giving diligence its due. It boils down to this: shall we invest our attention in depth or in breadth? How do we make that trade-off?

Figure 2. The Twitter Terms and Conditions summarized at 40% trust by the semantic spacetime sampler

Trust is an economic issue. Whether we avoid reading every word of a book, avoid sampling every dish in a restaurant before judging it, or forgo cash salary for options in our workplace, it’s a decision rooted in realism. It’s part of being selective about what’s promised by another agent, to balance one’s own needs.

When it comes to conceptual baggage, the evidence points to the idea that we’re more likely than ever to accept influencers uncritically, on the basis of brand rather than quality. We’d be well advised to selectively turn up our level of mistrust, from TL;DR to deep-seated curiosity, even as we admit that not everything can be given the level of attention we’d like. This is probably an area where automated systems can help to protect people from their worst instincts.

Give us this day our daily breadth,

And forgive us our details,

As we inspect those whose details offend us.

Lead us not into obsession,

But deliver us from tribalism,

For time is our kingdom,

And now is our hurry,

For ever and ever.

Nuff said.


Written by Mark Burgess

@markburgess_osl on Twitter and Instagram. Science, research, technology advisor and author - see http://markburgess.org and https://chitek-i.org
