Should we trust what we read?

Due diligence versus brand loyalty

Mark Burgess

--

TL;DR: human attentiveness has dwindled! The pace of modern cybernetic life saturates our cognitive faculties and leaves us unwary of concealed intentions, even vulnerable to suggestion. Once upon a time, we could take our time over breakfast with a newspaper or magazine and ponder the long and the short of the news; or we might settle down with a good book, or listen to a debate, and enjoy its subtle delights. Today, readers complain if any message exceeds “elevator pitch” length or “Internet meme” status. This leaves a nagging feeling that we are unsure of whom or what to trust. Watching kids multitask everything on their smartphones should make us suspicious of their actual level of attention to any one thing. We’re still learning how to navigate this new reality.

The trust project

In previous articles about trust, I looked at how patterns of regular activity in online services could be used to gauge predictability in keeping promises: the timing of responses is one quantifiable measure of behaviour, when promises are implicit in a service interaction. But one aspect of assessing promise keeping was missing. That’s what I want to think more about here.
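To make the timing idea concrete, here is a minimal sketch (my own illustration, not code from the earlier articles) of how regularity might be scored: it takes a history of response timestamps, computes the coefficient of variation of the intervals between them, and squashes the result into a score between 0 and 1, where 1 means perfectly regular promise keeping.

    from statistics import mean, stdev

    def predictability(timestamps):
        """Score how regularly a service keeps its implicit timing promise.

        timestamps: response times in seconds since epoch, oldest first.
        Returns a value in (0, 1]: 1.0 means perfectly regular intervals,
        values near 0 mean highly erratic behaviour.
        """
        intervals = [t2 - t1 for t1, t2 in zip(timestamps, timestamps[1:])]
        if len(intervals) < 2:
            return 0.0  # not enough history to judge
        m = mean(intervals)
        if m <= 0:
            return 0.0  # degenerate or unordered history
        cv = stdev(intervals) / m   # coefficient of variation
        return 1.0 / (1.0 + cv)    # squash into (0, 1]

    # Example: a service that replies at steady hourly intervals scores 1.0
    print(predictability([0, 3600, 7200, 10800]))

Any monotone mapping from variability to a score would do here; the point is only that timing regularity is countable, which is what makes this kind of promise easy to assess.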

Intent is signalled through promises, and measuring promise keeping is easy when it concerns simple, countable actions, but some promises are more hidden from view. Trust is not only about completing tasks: it’s about what people say as much as what they do; after all, saying is also doing. Focusing too much on Internet services as transactions neglects a deeper issue: the chatting and blogging that form a big part of what transpires on the Internet are where trust is more powerfully exploited. We need to build trust into these forms of expression too, and that means we need different tools to analyse text.

Automating the search for meaning

How should we rate a user’s intentions? One could ask ChatGPT or similar bots whether they consider a user trustworthy, but they are programmed, on ethical grounds, not to make such subjective judgements. LLMs can tell us, at great expense, what the gist of a text is, even the dominant emotions within it, but that isn’t the same as knowing whether its originator is trustworthy, or whether its consumer should trust it and at what level. The end user still needs to make that judgement themselves. At best, we can try to assist…
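As an illustration of that “assist, don’t judge” stance, a sketch along these lines could ask an LLM for the gist and dominant emotions of a text while explicitly leaving the trust verdict to the reader. The prompt and the choice of model below are my own assumptions, using the OpenAI Python client:

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    def summarise_signals(text: str) -> str:
        """Return the gist and dominant emotions of a text as raw evidence.

        Deliberately does not ask for a trustworthiness verdict: that
        judgement stays with the end user.
        """
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # illustrative choice; any chat model works
            messages=[
                {"role": "system",
                 "content": ("Summarise the gist of the user's text in one "
                             "sentence and list its dominant emotions. Do not "
                             "judge whether the author is trustworthy.")},
                {"role": "user", "content": text},
            ],
        )
        return response.choices[0].message.content

    print(summarise_signals("I promise the refund is coming, just be patient!"))

The output is evidence, not a verdict: it surfaces signals a hurried reader might miss, and stops short of the subjective judgement that still belongs to the human.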

--

Mark Burgess

@markburgess_osl on Twitter and Instagram. Science, research, technology advisor and author - see http://markburgess.org and https://chitek-i.org