Semantic Spacetime Meets Neuroscience

What can Promise Theory tell us about cognitive process?

Mark Burgess
16 min read · Aug 30, 2022

Space and time play central roles in almost everything that happens in the world, but they are also subjects that tend to be overlooked and suppressed as background issues. Thanks to popular science, we often think of space and time as something to do with the outer space of Einstein or Hawking–the domain of physicists–and the idea of an “inner space” sounds just a bit too New Age and hippy to be scientific. In this short essay, I want to show you that, in fact, our idea of space and time is too limited, and that network science helps to uncover a deeper understanding of processes, especially in connection with neuroscience. With this reorganized understanding, many phenomena reveal space and time to be far more central to explaining behaviours than we imagine. In particular, they lead us to a natural approach to cognition, to reasoning, and to adaptive processes at every scale.

Physics meets Computer Science and Information

As a physicist who turned to Computer Science and Information Science, I have long been keen to understand phenomena in space and time. It took me 20 years to unlearn a lot of what I thought I knew about the subject from physics in order to make progress. Like any subject, physics adapts to particular phenomena, but the broader popular narrative around physics today often presents a very curated and simplified view of space and time.

My connection to neuroscience came out of trying to apply the concepts of learning and scaling to the semantics of information systems in Computer Science–where the concept of scaling is only poorly developed. It was an entirely accidental association — a prediction of what has to happen in spacetime when you scale a process by increasing its complexity. Scaling is a subject that touches on fundamental issues, including measurement, and measurement is itself a spacetime issue involving scale.

A decade ago, I began to write a series of papers, and later a book, about unifying semantics with spacetime measurements. The aim was to integrate the qualitative with the quantitative. It’s an area that neither physics nor computer science has previously done a good job of explaining. Physics tries to suppress semantics in order to appear universal and impartial, while Computer Science tries to suppress scale by building semantics around it (large expressions become subroutines, then programs, then agents, and so on).

The result of unifying dynamics with semantics was something called Promise Theory and its application, Semantic Spacetime. It's closely related to graphs, processes, and even games. As a fundamentally process-related theory, the question of how one process observes another arose, and led to the idea of interior cognitive processes with cycles of steady state activity to sample the world outside. As such processes scale, surely you could end up with reasoning systems and agents that behave something like a brain?

One of my first questions at the time was therefore: what is the role of brain waves (i.e. cyclic rhythms) in neuroscience? What kind of waves are they, and how do they relate to the processes of memory imprinting and retrieval? What are the frequencies and wavelengths? Are they travelling or standing waves? I had no idea. I asked Twitter, and my friend Jim Rutt pointed me to Gyorgy Buzsaki’s first book, Rhythms of the Brain, just at the moment that his second book, The Brain from Inside Out, was coming out. I bought both of them (see figure).

Gyorgy Buzsaki's books on brain processes.

I had some predictions from physics and from my own work about why waves and cycles have to be important features of any dynamical process–this has to do with the Nyquist-Shannon sampling theorem. I’d been applying those findings about scaling to Artificial Intelligence or Knowledge Representation for many years, in order to design autonomous software processes (robots), like CFEngine. I thought that understanding brain waves might help to explain something about dreaming, but I knew basically nothing.

The books I found were quite dense for someone with so little knowledge of neuroscience, but I’d done enough bioinformatics and immunology in the past not to get completely lost. As I read them again and again, I learned new things a little at a time. Something immediately jumped out at me, however: remarkably, the author was arguing, from his neuroscientific perspective about the brain, in exactly the way I was arguing about process organization from generic principles of space and time for network interactions — namely, that the semantics of spacetime processes are at the root of how we need to understand cognition.

Each year I follow the Nobel Prizes and make it my mission to read some of the winners’ work, so I’d heard about place cells and grid cells in the brain. Suddenly I could see how several of these ideas were exactly what semantic spacetime predicted for a natural encoding of processes as information across different scales.

(Metric) Euclidean coordinate view versus (Semantic) Landmark view of navigation.

Of course, I wrote to the author Gyorgy as I always do when I'm excited to find someone interesting. Unusually, he was gracious enough to reply — and even read my own book. That’s how I got to be writing this.

Semantic Spacetime

Perhaps the casual mention of space and time sets off certain ideas already. You might think you know what role space and time play, but I suspect that it’s not exactly what you imagine. When we look at the role of space and time in computation, it looks very different from the picture that Euclid, Descartes, and Newton adopted — a continuous theatre with an infinity of points. We’ve developed a very particular idea of what space and time are in physics, because the physics of motion grew up around ballistics — the trajectories of arrows and cannon balls for fighting wars. Only much later did it turn to look at phenomena like containment, cycles, and the scaling of behavioural patterns.

How we describe change is the central issue in any type of computational or cognitive process. Change in space is how we create memory for keeping information. The external world keeps one form of memory in its arrangement of physical things. Brains and computers keep another kind of memory in virtual representations of those things. The combination of a location (address) and a value gives us a form of memory that we call a field in physics. So, from an information physics point of view, space is simply nature’s memory — physical or virtual. But those fields are not really continuous; only specific pathways involve actual realizable processes — the rest is speculation. We need to take account of the more detailed processes as networks.
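To make the address-plus-value picture concrete, here is a minimal sketch (my own illustration, in Python, with invented names) contrasting a field-like memory, in which every location carries a value, with a network memory, in which only realized pathways between agents carry information:

```python
# A minimal sketch (illustrative only): two ways of keeping "memory in space".

# Field-like memory: every location (address) carries a value, realized or not.
field = {(x, y): 0.0 for x in range(4) for y in range(4)}
field[(2, 3)] = 1.5   # writing a value at an address

# Network memory: only realized pathways (edges between agents) carry information.
network = {}                      # (source, target) -> value along that pathway
network[("A", "B")] = "signal"    # an actual process linking two agents

# The field pretends all 16 locations are equally real; the network records
# only the pathways where a process actually took place.
print(field[(2, 3)], network[("A", "B")])
```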

This tension between continuum and network representations of spacetime is present even in the physics of fields. Feynman diagrams show the network structure of processes that make up the patterns of interaction in physical fields. The sum over a continuum of speculative points, which turns the diagrams into a field, leads to a lot of redundancy that has to be renormalized away. That's perhaps the price of idealization. In a field theory, all the properties and processes are modelled by being externalized between the point locations. The difference in Promise Theory is that the nodes in a network are not just featureless locations; they are active agents, with interior processes. So agents become the sources of properties and activity; they aren’t merely a background in which activities take place. A Promise Theory network means:

  • Agents are autonomous, i.e. each location is causally independent by default.
  • Agents interact by promising each other certain behaviours, which they are themselves responsible for keeping. Interior processes keep promises about self.
  • No agent can make a promise on behalf of any other. This expresses “locality”.
  • Autonomy implies that what is offered by one agent to another has to be explicitly accepted by a process (sampling) of the receiving agent–else nothing is transmitted.
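As a minimal sketch of these rules (my own illustration in Python, not the formal Promise Theory calculus), nothing flows between two agents unless an offered promise from one meets an explicit acceptance by the other:

```python
# Illustrative toy model of autonomous agents and promises (names invented).

class Agent:
    def __init__(self, name):
        self.name = name
        self.offers = {}      # (+) promises of behaviour this agent makes
        self.accepts = set()  # (-) promises this agent agrees to sample

    def promise(self, body, value):
        # An agent can only promise its own behaviour, never another's.
        self.offers[body] = value

    def accept(self, body):
        # Autonomy: nothing is received unless explicitly accepted (sampled).
        self.accepts.add(body)

def transmit(giver, receiver, body):
    # Information flows only where a (+) offer meets a (-) acceptance.
    if body in giver.offers and body in receiver.accepts:
        return giver.offers[body]
    return None  # no binding, so nothing is transmitted

a, b = Agent("A"), Agent("B")
a.promise("temperature", 21.5)
b.accept("temperature")
print(transmit(a, b, "temperature"))  # 21.5: offer met acceptance
print(transmit(b, a, "temperature"))  # None: A never accepted anything
```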

Fleshing out these rules leads you to Promise Theory, which is surprisingly rich in its consequences for such a simple reorganization of thinking. Applying the implications of Promise Theory to how we describe change leads to the notion of Semantic Spacetime.

In technology, Promise Theory has already changed the way we build solutions. The computing cloud and Internet giants like Facebook, Google, and LinkedIn use these principles to scale processes to massive size.

When I started thinking about these ideas, computer systems had hundreds or perhaps thousands of computers. Today, there are tens of millions. Indeed, technology is now approaching biological numbers of agents. As a result, we’re starting to see phenomena and strategies that are more familiar from biology, like cellular design and redundancy strategies, and we even see phenomena that look like quantum mechanics, such as entanglement and coherence. These are still far from being well known or well understood in academia or industry, but this simple picture of processes as networks of autonomous agents connecting and promising over different scales leads to rich semantics.

Naturally, the king of all such systems is the brain.

Hardware and Software

The processes that we call virtual are a kind of software that runs on a layer of hardware. The distinction isn’t altogether clear, but we like to make it anyway. Virtual means something to do with the ephemeral states represented by the hardware. Timescales are involved in the distinction.

A wave is a virtual process, which involves the states of many locations along its path. Software is about how patterns encode information, so waves are also software running on the underlying hardware of a physical substrate, with a particular scale determined by the wavelength. Water waves are run on water. Electromagnetic waves are run on "the electromagnetic field" or "vacuum", whatever that is.

Processes in spacetime that represent symbolic information link to ideas about mathematical grammars of the Chomsky hierarchy, Turing’s work on computation, Shannon’s Information Theory, and of course semantics and knowledge representation. These links tie together spacetime and all the processes behind cognition and reasoning.

In Newtonian Euclidean physics, spacetime focuses on which things are next to each other, or the concept of adjacency. But adjacency is only part of a larger language that actually describes process. We should also talk about how processes repeat patterns in cycles, orbits, and waves. Cycles are associated with spatial or temporal loops, and those are implicated in the steady state processes involved in the transmission and sampling of unexpected information (like sensory data).

Phenomena also grow from small to large, break up into cells, or combine into functional structures to express patterns. What happens to promises made by the agents at different scales? Suddenly, agents that started out behaving as a resistor, a capacitor, and a semiconductor combine to become a radio or a television, starting a whole new language at a different scale.

In semantic spacetime, points aren’t featureless coordinates; they are agents. They describe an inside and an outside, like cells, and the question “Am I inside or outside a process region?” turns out to be at least as important as the question of whether something is next to something else. The scaling of spatial containment takes us from quarks to atoms to molecules to cells to organs to organisms and ecosystems–from bits to bytes to words to documents and programs. At each level, it’s a story about processes with semantics, but the promises are very different on each combinatoric scale.

One of the challenges is to understand how combinatorics leads to the emergence of new languages and new concepts from smaller impulses. This is where the concept of Semantic Spacetime is useful.

The Four Horses of Semantics

Semantic Spacetime predicts that there are four key semantic relationship roles for connections in a semantic network. These are generic and can be adapted to different purposes, if we don’t take their linguistic expressions too seriously. Indeed, their true underlying meanings correspond to notions about space and time. We can explain them approximately as follows, with the generalized category first and example meanings after:

  • Similarity or nearness (spacelike proximity) – “is close to”, “is like”
  • Sequence or causality (timelike order) – “follows”, “leads to”, “depends on”
  • Containment (scaling, inside/outside) – “contains”, “is part of”, “generalizes”
  • Property expression (interior state) – “expresses”, “has attribute”

Remember that space is memory. In neuroscience, memories are explicit encodings: episodic strings (timelike) or semantic regions (spacelike). There’s also implicit memory, like muscle memory for those who play musical instruments, which I don’t know how to distinguish from explicit memory. We ought to expect these four roles in a brain too.

We have to account for the timescales of memory processes. Agents must have cyclic processes to sample and integrate information from outside to inside. This cyclicity comes from within agents. Cycles act as a clock, setting a heartbeat or basic timescale for processing. Semantic Spacetime predicts that short term memory forms a contextual map, which can be used to address or allocate space in long term memory.

A short term memory is usually a simple token in semantic terms, because that's all there is time to process, but a long term memory has to be robustly identifiable across many different contexts, so we expect it to be a much larger structure, activating on a larger scale and slower to form. The cycles associated with these two scales could correspond to different brain rhythms. Theta cycles seem to be associated with the sampling of information from within and without when integrating information. Lower frequency, longer waves experienced during sleep may be able to pick out mainly general concepts, not specific high resolution memories. Perhaps that’s why dreams are quite low resolution in detail.
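As a toy sketch of this two-timescale idea (my own illustration in Python, with invented names, not a neuroscientific model), a fast cycle collects simple tokens into a short term context, and a slower cycle uses that context as the key for addressing long term memory:

```python
from collections import deque

# Toy two-timescale memory (illustrative only).
short_term = deque(maxlen=7)   # fast cycle: a small buffer of recent tokens
long_term = {}                 # slow cycle: larger structures keyed by context

def fast_cycle(token):
    """Each fast tick samples one simple token into the short term context."""
    short_term.append(token)

def slow_cycle():
    """Each slow tick uses the short term context to address long term memory."""
    context = frozenset(short_term)                      # the contextual map
    long_term[context] = long_term.get(context, 0) + 1   # reinforce slowly

for word in "the cat sat on the mat".split():
    fast_cycle(word)
slow_cycle()
print(long_term)   # one long term entry, addressed by the recent context
```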

In AI, the role of cycles is quite obscure. Artificial Neural Networks (ANNs) work mainly by ballistic injection — like sorting funnels, but there are implicit cycles involved in determining the network memories to begin with. They are obscured by the data collection and the supervised nature of the training, but they are certainly there–buried in the processes. The difference between short and long term memory is also there, as the difference between a Deep Learning network and a simple lexical token, for example.

Stories as semantic processes

Semantic Spacetime predicts that semantic memories form stories, i.e. trajectories through a multidimensional semantic spacetime. A quick back-of-the-envelope calculation (see the figure below) shows the organization of memories along the four network association types.

A snaking story is a thread connecting the flow of structures via the four semantic associations.
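A minimal sketch of a story as such a trajectory (my own illustration in Python; the node names are invented, and the edge labels stand for the four association roles): memories are nodes in a graph with typed links, and a story is a walk along them:

```python
# Toy semantic network (illustrative only). Edge types are the four roles.
edges = {
    ("breakfast", "walk"): "FOLLOWS",     # timelike sequence
    ("walk", "park"):      "CONTAINED",   # containment: the walk was in the park
    ("park", "garden"):    "NEAR",        # similarity / proximity
    ("garden", "green"):   "EXPRESSES",   # a property the garden expresses
}

def tell_story(start, steps):
    """Thread a story by following available typed links from a starting memory."""
    story, node = [start], start
    for _ in range(steps):
        hops = [(b, t) for (a, b), t in edges.items() if a == node]
        if not hops:
            break
        node, link = hops[0]
        story.append(f"-{link}-> {node}")
    return " ".join(story)

print(tell_story("breakfast", 4))
# breakfast -FOLLOWS-> walk -CONTAINED-> park -NEAR-> garden -EXPRESSES-> green
```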

It’s easy to fall into lazy computerized thinking when tying together these different examples. Memory doesn’t have to be a pattern at a fixed location with a numerical address. That's not how brains work. In fact, as long as there is a persistently addressable process (say, like a spinning top), that is enough to remember information. Addressing is another process of discrimination, more like a kind of semantic interferometry. We forget to think dynamically because we are used to the frozen language of books and printed materials. Writing may be static, but the process of reading is dynamic. Speech is always dynamic in the sounds that form symbols. Computer memory looks fixed and lattice-like from one scale, but we need the sampling cycles of a CPU to manipulate it. If you zoom in far enough, you’ll also find the electrons at work; there are always dynamic and redundant processes at work to maintain that illusion.

Fantastic beasts of scale

Scale surprises us, because we humans are designed to operate at one fixed scale, which is the scale of the animal kingdom. Size is only one kind of scale, though. Another has to do with information. Interior complexity (the amount of pattern and process enclosed by a region of a certain size) also leads to what we might call semantic scale. Perhaps unsurprisingly, although physics is the preeminent study of scaling, even physicists struggle to imagine the broader implications of scaling. Computer Scientists, by contrast, have never really understood dynamical scaling at all, except in queueing, and even semantic scaling is mainly understood through the Chomsky hierarchy of process complexity. Semantic Spacetime tries to bring together and unify these ideas. The role of networks of agents is the key, because it allows us to separate inside from outside (containment)–while physics is mostly obsessed with next-to (translation).

Let me give a couple of examples of how scaling leads to unexpected ideas about cognition (agents that adapt from within to change from without). The German forester Peter Wohlleben showed in his book The Hidden Life of Trees how communication channels in forests create reasoning networks through fungal messengers, on a very slow timescale. If forests are like brains, their thoughts could take years or even decades to complete an idea. James Lovelock’s Gaia hypothesis was a similar idea: is the whole Earth a kind of reasoning system? Well, it could be, but how would we know? Whatever thoughts such a system might have, we would almost certainly have nothing in common with them, nor could we live long enough to see them to completion. Scales are everything.

This scaling idea is the same argument I have used against the many unreasonable claims about the emergence of reasoning in Artificial Intelligence. If machines ever became spontaneously intelligent, we might never know, because they wouldn’t experience the world anything like we do. Immunology gives us one perspective on what reasoning looks like at a different scale. From a semantic similarity point of view, immunological recognition and reactive processes qualify as reasoning, but nothing like what most of us would consider conscious thought. We need to be more imaginative and flexible in the use of these concepts if we want to make progress.

Space isn’t just about lengths. Orientation and symmetry also matter. Biology shows how EXPRESSed properties can lead to symmetry breaking and provide key phenomena in agents, including cephalization and axial symmetry for a biological type of motion, e.g. swimming. The Newtonian concept of momentum is much too idealized for us to generalize, but quantum mechanics and virtual motion actually show us how wave processes connect momentum to axial cephalization across unimaginable scales.

This brings us back to brain waves. A wave has a scale in space and time, and memory has a certain scale too. Which frequencies are responsible for memory processes in a brain? From the perspective of a partially educated information scientist, the famous theta waves seem to play a role something like a Nyquist sampling process. The faster the cycles, the more specific the memories you would expect to pinpoint; the longer the wavelength, the more generic the memories provoked. Could this tell us something about how dreaming works? Dreams rarely fill in any details they don’t need to. You might know the name of someone in a dream, yet they look nothing like your memory of that person. There are some interesting paradoxical processes at work.
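The Nyquist-Shannon intuition is easy to demonstrate (a generic signal-processing sketch in Python, not a model of theta waves): a sampling cycle can only resolve features that vary more slowly than half its sampling rate, so slower cycles recover only the coarse, generic shape of a signal:

```python
import math

# A signal with a slow, generic component and a fast, specific detail.
def signal(t):
    return math.sin(2 * math.pi * 1 * t) + 0.5 * math.sin(2 * math.pi * 10 * t)

def sample(rate_hz, duration=1.0):
    """Sample the signal at a given rate for a given duration."""
    n = int(rate_hz * duration)
    return [signal(i / rate_hz) for i in range(n)]

fast = sample(50)   # 50 Hz > 2 x 10 Hz: resolves the fine 10 Hz detail
slow = sample(4)    # 4 Hz < 2 x 10 Hz: the 10 Hz detail is lost (aliased),
                    # leaving only the slow 1 Hz generic shape
print(len(fast), len(slow))   # 50 samples vs 4 samples per second
```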

Hippocamp and pesky emotions

Processes that generate storylike narrative without outside help, like dreams, tell us that something in the brain has to retrieve random memories and string them together (often with great imagination) into a timeline or story. Neuroscientists believe that the hippocampus is involved in this. It has a nice spatially cyclic structure that makes it a good candidate for forming a sampling cycle, but much is yet to be discovered. It probably can’t be solely responsible for the functions of conscious experience, because there needs to be another executive process integrating and assessing the possible relevance or desirability of the outcomes.

Ironically, my own studies of the semantics of information suggest that emotional assessment is likely a key mechanism for path selection. There has to be a ranking, i.e. a relative or quantitative arbitration, of the symbolic memories — just as evolution or epigenetics makes selections based on population resonances at different scales. The path of a long term memory process that resonates with the immediate short term context is the one that’s selected by the short term process. This interplay between semantics and symbolism on different scales is a likely recipe for complex reasoning. Forget about fractals; we see these ingredients for pattern recognition all across nature.
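A minimal sketch of such an arbitration (my own illustration in Python, with an invented scoring rule): generate candidate long term paths, score each by its overlap, or resonance, with the short term context, and keep the most satisfying one:

```python
# Toy path selection by resonance with context (illustrative only).
context = {"rain", "umbrella", "street"}    # short term contextual tokens

candidate_paths = [                         # long term story fragments
    ["sunshine", "beach", "swim"],
    ["rain", "street", "cafe"],
    ["umbrella", "rain", "street", "home"],
]

def resonance(path, ctx):
    """Rank a path by how many of its elements resonate with the context."""
    return sum(1 for step in path if step in ctx)

# The arbitration step: select the path that resonates most, i.e. the one
# that "feels" most relevant, which is the role this essay assigns to emotion.
best = max(candidate_paths, key=lambda p: resonance(p, context))
print(best)   # ['umbrella', 'rain', 'street', 'home'], with 3 matches
```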

In science fiction stories, aliens and mad computers are always trying to eliminate pesky and unwanted emotions to become perfect examples of pure logic. I think this is the biggest misunderstanding about reasoning that we perpetuate. From my own work in knowledge representation, emotion is a natural candidate for being the final arbiter of reason in assembling episodic stories from memory. Logics are only artificially constrained idealizations of reason. They try to eliminate references to subjective emotions by throwing layers of conditionality over stories through their process rules, but that’s a whitewash. There is no such process in nature. Choices are made, more like Feynman diagrams and interferometers, by generating a bunch of possible options and selecting the one that you find most satisfying. Similarly, when we talk about understanding, it’s emotion at work. We explain a phenomenon in terms of another, and another, and eventually we give up asking for further explanation, because we trust some part of the answer. It’s when we’re happy to trust without question that we finally feel satisfied with explanations. This is how Zeno’s paradox is solved in reasoning. On the scale of the social sciences, it has also been proposed that trust forms a kind of action potential for decision making.

Neuroscience and spacetime

How does any of this help us to understand brains? On one level, spacetime offers a useful approach to organizing thinking about processes (there's a meta process if ever there was one!).

Space and time are what bring order to change.

Thinking in processes instead of structures helps us to understand the world and its scaling better. The power of Semantic Spacetime is to identify what Newton called dynamical similitude: similar phenomena may have similar explanations–even across different scales. It gives us something to look out for.

Experiments in neuroscience and in quantum physics are both difficult and expensive; experiments in computer science are much easier. What we are finding is that, as computers get more complex, we begin to see phenomena that look increasingly like biology and quantum mechanics. That's surely something to do with scale. It’s worth looking for similarities, especially in a much simpler and more accessible arena such as information systems.

The brain creates the sensation of consciousness, of a running storyline in our sensory experience. It happens with outside help during the day, and without outside help during the night. The differences between waking and dreaming could probably tell us a lot about how this experience is manufactured. Semantic spacetime gives some clues. Such a process needs cycles to drive it, and probably multiple process agents to observe and assess one another, compartmentalizing the memories on different scales that it draws on. The difference between temporal evolution and generalization will play a role. These are the four horses, or perhaps hippos, of semantics. Structurally, I believe that this timelike generation is related to language, i.e. a serialization of quasi-symbols at different scales. It suggests that all conscious beings would naturally have some kind of internal language process, because the process elements are so similar.

In most people, conscious experience is a single meta-process. What about in people with multiple personalities?

Coda

There’s a lot of blind belief and politics in Artificial Intelligence. There’s money and glory in them thar hills. But, politics aside, AI researchers are doing fine work in the limited arena of cognition, piecing together syntactic pieces of scaled symbols on a cognitive scale, but with relatively little work on story generation at a generalised level. Concepts like understanding can’t even be formally defined in most work — these are challenges to be met. That’s where Semantic Spacetime presents opportunities for discussion with experimentalists, to build a larger theoretical narrative around the subjects of cognition, intelligence, behaviour, and brain science. It’s early days, but there are fascinating opportunities.

Written by Mark Burgess

@markburgess_osl on Twitter and Instagram. Science, research, technology advisor and author - see http://markburgess.org and https://chitek-i.org
