Semantics of Spacetime and Cognitive Processes

What does a model of network spacetime have to do with understanding meaning and intent?

Mark Burgess
Sep 20, 2022

(Based on a talk given at the Kavli Foundation Salon, Oct 2022)

Neuroscience overlaps with networking in a number of ways, and on a number of levels. Let me discuss one case that concerns the way we model space and time in cognitive phenomena. By treating space and time as a network of agents, we gain a kind of semantic encoding and retrieval of episodes, which automatically leads to the emergence of concepts. Moreover, the spacetime scales in the input allow us to define meaning and intentionality based on the sparse occurrence of patterns. Space and time play important roles in neuroscience, from the problem of sensory navigation to the processing of signals in brain networks.

Put simply, space is the set of variables that encodes state (i.e. memory) in any system. In school, we're taught to think about space and time in the Euclidean-Cartesian manner of coordinates and externalized processes. This is because physics was principally developed in Newton's time to describe states in ballistic phenomena, where states are positions and velocities. But this is too limiting in general. The kinds of transactional processes we find in biology, chemistry, economics, or even quantum mechanics, can't naturally be described in ballistic terms. In biology, state is encoded through proteins and activation levels inside cells, so memory is inside a cell boundary. Insideness and outsideness are a different way of modelling space and time, which is naturally taken care of in a graph or network representation.

A shift from externalized phase space to interior state space characterizes modern thinking. Some of our cognitive memory is inside us, but most of it is outside us, available for stigmergic cooperation with our future selves and other individuals.

Let’s use network spacetime to examine the origin of meaning from sensory experiences, otherwise known as cognitive semantics. This involves not one but two distinct networks: an outward looking network that collects and processes information, and an inward looking memory representation. The latter encodes and actively uses the information to adapt its own understanding with the help of the former. Both networks are dynamical processes, not just static cabling.

We define a cognitive system as one that’s capable of receiving a stream of sensory input, and which makes an effort to understand it in some way. Some of a cognitive agent's memory is isolated and private, on the inside, but most of it is on the outside, available for stigmergic cooperation with our future interactions and with other agents. To think of an agent's process memory as only its own private space is thus a potentially huge mistake. The bulk of memory is external. How then may we understand the implications of this? On a formal (mathematical) level we’ll need to define what we mean by sensory stream, understanding–and indeed, meaning itself!

We begin with how a network can understand change — and why this is about space and time.

Agent spacetime

One way to view the world is in terms of agents, perceived at different scales and of different levels of space and time complexity. By agent, we mean some sort of semi-bounded partial system, like a cell in biology or a computer in IT. This isn’t the usual way we think of the world: it’s more common to imagine stories about identifiable things that move about in a theatrical continuum space.

An agent exists not only in space; it also actively does something in place. In computer science, we would say it has process semantics, i.e. it has a role to play or functional purpose that comes from within. From this point of view, it wouldn’t be wrong to say that a pretty important issue across the sciences lies in answering the question:

How can we understand the internal and external behaviours of agents, which can react and respond meaningfully to information, learn and develop concepts and form episodes and stories from an ambient environment?

The question is not unlike the more general manifesto of science itself. Yet, in the sciences, we have very different ways of approaching this kind of question. A physicist might look for an equation of motion for the agent, as a trajectory in some kind of Cartesian spacetime, and be satisfied with that description, or perhaps hope to predict a rate or probability of change of some variable. A biologist might try to describe the agent in qualitative terms as a reaction from cell to cell. What does it interact with, and what is the outcome? A computer scientist might look for the software that drives the agent’s interior processes and network communications. And so on. Each version of truth reveals some preconceptions about the way the disciplines think — in particular, how they think about space and time, which is where everything happens.

Interior and Exterior thinking

In physics, we have a tradition of thinking everything happens externally, in the space between agents. Behaviour is ruled by ballistic models like billiards or pinball (also called memoryless or Markov processes). The agent is just a passive thing that gets batted around in some external coordinate system, where all the information is encoded as positions, times, and speeds. In biology, by contrast, cells are ruled from within: they contain factories for manufacturing proteins that are not visible from without. How they move in outer space isn't generally relevant; their behaviours are ruled by persistent interior states, and their positions in the exterior world are somewhat less important than the functions they express. In computer science, agents are computer programs that run on computers or virtual computers; they might be moved around externally, as well as manufacturing output from inputs internally. These are both memory processes.

Suppose we call an agent cognitive if it distinguishes interior from exterior, and promises to accept information from its exterior, change its interior state in response, and then express some kind of processed output to its exterior based on that interior state. This functional picture in fact describes all the cases above, on a sliding scale of semantic complexity, from the dumbest billiard ball to the smartest brain. Agent models are powerful.
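To make that functional picture concrete, here is a minimal sketch in Python (the class and method names are illustrative assumptions, not anyone's reference implementation) of an agent that accepts exterior information, updates private interior state, and expresses output based only on that state:

```python
class CognitiveAgent:
    """Minimal sketch: an agent that separates private interior state from
    exterior exchange."""

    def __init__(self):
        self.state = {}                     # interior memory, invisible from outside

    def accept(self, observation):
        """(-) side: receive exterior information and update interior state."""
        self.state[observation] = self.state.get(observation, 0) + 1

    def express(self):
        """(+) side: offer processed output based only on interior state."""
        if not self.state:
            return None
        return max(self.state, key=self.state.get)   # e.g. the most familiar observation

# From a billiard ball (trivial state) to a brain (rich state), only the
# interior process changes; the accept/express boundary stays the same.
agent = CognitiveAgent()
for token in ["whale", "sea", "whale"]:
    agent.accept(token)
print(agent.express())   # -> "whale"
```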

It’s all in the processes

The world is not so much a collection of things in a room as an assembly of processes tracing out some pattern of change. Processes involve some state that changes in space and time–you can’t measure either space or time without the other, so we might as well just talk about processes that involve both from the beginning.

Physics started by describing the motion of cannonballs and arrows in war, and thus it formed an idea of change within a three-dimensional memory space that Euclid and Descartes called (x,y,z). The variables were position and time; together with velocity (the process variables), this is sometimes called phase space, and space was a continuum of possibilities. This model doesn't apply in the same way to the changes of variables in a computer program or in a cell. There, we have a finite number of states (an alphabet of some kind), and change might be either interior to the agent or exterior, along some channel that connects two agents. Shannon described such channels in his mathematical theory of communication. The collection of agents and channels is a network, either physical or virtual. The key thing is that it's causal.

Instead of using the locations of objects in a continuum as memory, a network of agents, each with interior memory, keeps its memory in delocalized private chunks. It’s not really a separate model. We can represent ballistic motion in this way too — indeed, dividing space and time up into a network of finite steps was exactly the procedure Newton used to construct the calculus, and this was also the approach used by Feynman to construct the path integral in quantum theory.

Promises and agent locality

So, without approximation, a very general starting point is to view the world as agents connected by channels in a network. This general picture is described by Promise Theory — and its model of processes in a spacetime like this is called Semantic Spacetime. It forms a model that can describe motion, reasoning, computation, etc. If we make agents autonomous by default, i.e. self-governing, then communication between agents requires a promise to send (+) and a promise to receive (-) between each pair of agents involved in a process that spans more than one agent. What each agent does internally with the information it receives can also be described by a kind of promise (e.g. a computer program, a DNA strand, or some protein formations).

Autonomy or locality of agents is the key distinction between Promise Theory and global network models here. A lot of people studying agents view them as governed from outside, not from within — like cellular automata with global rules defined by a godlike exterior view. Promise Theory thus embodies an extreme form of the locality principle in physics for agents.

Give and take in agent spacetime

By distinguishing (private) inside from (shared) outside, agents can be next to one another, or they can also be inside one another, like Russian dolls. Quarks inside protons inside atoms inside molecules inside cells, etc. This accounts for spatial scaling of role semantics via interfaces as well as distance. Now we can deal with the composition of larger structures from smaller structures. Euclidean spacetime, by contrast, is featureless, except maybe for holes, and points are always next to one another, forming lines, planes, and volumes. It’s only really good for discussing symmetries. But as soon as you want to describe change, you start by breaking those symmetries.

Autonomy means that agents can’t be driven from without. They can’t be forced to behave in a certain way from the outside. There’s always a formal opt out — autonomy implies "consent" or acceptance. You'll have to play along with this idea for elementary things. An electric field can't force a particle to move; rather, the particle signs up to such influence by promising a certain charge. In this language, particles don't "have to" promise electric charge, so we can treat this acceptance of being moved by a field as an autonomous "choice", i.e. a generalized promise. There is no need to bring the anthropomorphic baggage of free will, or any such stuff, into the discussion; the term promise is useful as a generalization of autonomous semantics as we move up the scales to more sophisticated agents. In other words, Promise Theory treats the properties of agents as "decisions" of the agent itself.

Agents independently promise their offer of “services” to any agents in scope, and these must similarly promise to accept certain influences or services from others in order to receive them. I choose these terms carefully, with specialized meanings for promise and service, and so on, because this language helps us to understand how to scale these ideas from the very small to the very large–particularly in a human and social context, which is where I believe neuroscience lies.

Autonomous agents therefore have to have two complementary kinds of promise, called (+) and (-) promises. The (+) promises signify what agents are willing or able to offer, and the (-) what they are willing or able to receive. There is offer and accept (selection) in these bindings, mediated by processes inside the agents. In biology, for example, an exchange of protein interactions is enabled by protein manufacturing inside cells. In quantum mechanics, we would call certain bindings entanglement, with wavelike processes “inside”. Again, all this can be made rigorous.

The offer and acceptance between agents leads to a unidirectional influence. Each side promises its part independently, or autonomously.
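As a toy illustration of this binding (the Promise class and the bound() check below are my own hypothetical constructions, not Promise Theory's formal notation), influence only flows when a (+) offer from one agent is matched by a (-) acceptance from the other, each made independently:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Promise:
    agent: str      # who makes the promise
    sign: str       # "+" to offer, "-" to accept
    body: str       # what is offered or accepted, e.g. "data" or "charge"
    to: str         # the agent in scope of the promise

def bound(offer: Promise, accept: Promise) -> bool:
    """Influence flows only if a (+) offer is matched by a (-) acceptance of the
    same body, each promise being made autonomously by its own agent."""
    return (offer.sign == "+" and accept.sign == "-"
            and offer.body == accept.body
            and offer.to == accept.agent
            and accept.to == offer.agent)

give = Promise(agent="A", sign="+", body="data", to="B")
take = Promise(agent="B", sign="-", body="data", to="A")
print(bound(give, take))   # True: a one-way channel of influence from A to B
```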

In order to make this happen, agents need to have interior processes that keep their exterior promises. This is a scaling principle–I call it semantic scaling. In physics, we have a similar principle called the separation of dynamical scales. The very small and the very large tend to decouple across some dynamical scale boundary. Here, we can say the same thing about functional behaviours. This is what allows us to build systems of components by composition: televisions from transistors and capacitors, cells from atoms, organisms from cells, and so on.

Four essential semantics

Semantic Spacetime, then, is a model of processes, as a dynamic network of interactions between autonomous agents, where the agents work together by promising their behavioural semantics in the manner described above. Agents may be part of other agents in a hierarchy of scales.

Simple networks just have passive nodes and links. Agent networks, on the other hand, can make their active agents either the same or fully unique in space and time. Promise bindings between them make the links different too, by the specifics of the promises they make.

A network isn't just about being next door to another agent, like a fixed map. It's about what each agent can do for potential neighbours at each moment of time. By studying such networks (see the references at the end) we find that their semantics have four meta-types of binding — but all built on spacetime notions. For all the specialized functions we can imagine, causality seems to constrain every binding to have one of the semantic spacetime types in the table below.

The four semantic meta types behind all services and reasoning are spacetime concepts.
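The table itself is an image, so as a rough stand-in here is the list of the four meta-types as they are used later in this article (FOLLOWS, CONTAINS, EXPRESSES, NEARNESS), expressed as a small Python enum; the one-line glosses are my own paraphrases:

```python
from enum import Enum

class LinkType(Enum):
    """The four spacetime meta-types of binding, as used later in this article."""
    FOLLOWS   = "follows"     # timelike order: one event comes after another
    CONTAINS  = "contains"    # containment: part/whole, inside/outside
    EXPRESSES = "expresses"   # expression of a property or attribute
    NEARNESS  = "near"        # similarity or proximity, a generalized distance

# Every link in a semantic spacetime graph carries one of these four semantics,
# e.g. ("sentence_42", LinkType.CONTAINS, "white whale").
```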

If we compare this to Euclidean space, where one only has “next to” as a relationship, this model is richer: it incorporates processes, as well as fields of matter and energy, as an integral part. So semantic spacetime can capture more of what goes on than Euclidean space, because it naturally incorporates “matter” and structure. There is no passive packaging.

Cognitive processes in neuroscience or machines

So, coming back to neuroscience, we can discuss the role of space and time in processes of cognition and reasoning. How does this agent model help us to understand sensory causation and cognition? Spacetime enters into the way we chop up exterior inputs in a stream (signal processing) and into the interior representation of concepts extracted from the input.

On a superficial level, the agent model creates a natural architecture for cognition — with an independent inside that can individually interpret, cache, and understand information from an outside. From a Euclidean point of view, it’s not very obvious what spacetime has to do with sensory experience, sampling, or information processing. However, by drawing a ring around a region, which contains memory and processing capability, we can see how to make the cyclic circuitry for sampling, processing, and safely storing data received in a stream from the outside, with private context.

This is a very simple idea, with huge consequences. Instead of trying to describe it in the abstract, let me show you one example of its implications that we can all relate to: understanding a stream of language.

A toy language model

Natural Language Processing (NLP) is a spacetime process, though in computer science and linguistics we normally think of it as a linear symbolic process. We would like to see how an agent could get from distinguishing individually written symbols (or spoken phonemes) to forming, and even relating, full-bodied reusable concepts of general significance. The trick is to recognize that the input expresses one story in a network of space and time, and can be remembered as a code written in another network of space and time.

In the usual discussions about NLP, one imagines that the meaning of text is intrinsic in the syntax of the symbols. That’s a form of cheating, using dictionary and grammatical knowledge as a shortcut to establishing the meanings that took millennia to evolve. It assumes we already know what meaning is — a different problem, in my view, to divining meaning from inputs. Chomsky’s idea that there is some universal grammar for finding meaning might not be complete nonsense–not in the literal sense, but in terms of the apparatus for resolving spacetime structures. A grammar is basically a set of boundary conditions on a process, to be resolved by the connecting dynamics. Grammars are actually multi-dimensional graphs.

A cognitive agent perceives the world as a series of frames, containing patterns that it may be able to process (promise to accept). Each frame is a snapshot of a running context, traced by the interior memory of the agent.

In most approaches to Natural Language Processing, one uses Big Data to train a system to understand meanings that have already been established in the human world through centuries of evolution. That's not what I want to discuss here. Rather, let's start from no prior training and see what we can bootstrap from nothing to surmise meanings based entirely on patterns in space and time, using deliberately limited Small Data and the four spacetime semantics. The goal is to see how linguistic concepts could emerge based entirely on pattern recognition of spacetime processes. I'll cheat a little and use our knowledge of language to give a God's-eye view of how concept emergence works.

Input-fractionation method

We start with a sensory apparatus, which is basically a pattern discriminator for symbols, sampled in frames (see figure above). The division into read frames is the first role played by spacetime: it sets a scale for the agent's perception. We transmute information at this dynamic scale into a network representation across several semantic scales.

For language, the dynamical scale is a natural sampling rate determined by the characteristic patterns of the input spacetime: the alphabet. As sensory phenomena, alphabetic symbols may have spatial representations in two or sometimes more dimensions, which complicates the business of transducing a stream of input from one form of sample into another, even though (mathematically) we can compress each stream into one-dimensional sequences by relabelling.

Agents have to deal with multiple representations at different scales in order to recognize their environments, so we can use a sliding window of fragments with different word lengths to parse text and try to turn it into emergent concepts. We want to distinguish small changes. We do this by breaking up textual input into small pieces–just as we might do in a DNA analysis.

  • In English text, a word is the smallest pattern we distinguish.
  • We look for all the fragments (n-grams) of different word lengths in a document and count them to determine their importance.
  • If a symbol is only used once, it’s probably spurious and we can probably ignore it.
  • If a symbol is used repeatedly and continuously, it’s probably white space and we can probably ignore it.
  • If it's used repeatedly but not too often, with an irregular pattern, it's surprising and possibly meaningful.

Based on this, let’s make a hypothesis and define the importance of an input fragment (as a measure of its effective intentionality) to be proportional to the length of the utterance (work done) in the input space, and inversely proportional to its frequency. This ratio has the dimensions of “action”. This can be counted in real time (without pre-training).
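As a minimal sketch of this recipe (the function names and frequency cut-offs are illustrative assumptions, not a reference implementation), the score really is just fragment length divided by frequency, counted as the stream is read:

```python
import re
from collections import Counter

def ngrams(words, n_max=6):
    """All word fragments (n-grams) of length 1..n_max in a list of words."""
    for n in range(1, n_max + 1):
        for i in range(len(words) - n + 1):
            yield " ".join(words[i:i + n])

def fractionate(text):
    """Split a document into sentence events and count their n-gram fragments."""
    sentences = [s.split() for s in re.split(r"[.!?]+", text.lower()) if s.strip()]
    counts = Counter()
    for words in sentences:
        counts.update(ngrams(words))
    return sentences, counts

def intentionality(fragment, counts, low=2, high=1000):
    """Importance ~ length of the utterance (work done) / frequency of occurrence.
    Fragments seen only once (spurious) or far too often (padding) are ignored,
    following the heuristics in the bullet list above; the cut-offs are arbitrary."""
    freq = counts[fragment]
    if freq < low or freq > high:
        return 0.0
    return len(fragment.split()) / freq

def rank_sentences(text):
    """Rank sentence events by the summed intentionality of their fragments."""
    sentences, counts = fractionate(text)
    score = lambda words: sum(intentionality(f, counts) for f in ngrams(words))
    return sorted((" ".join(w) for w in sentences),
                  key=lambda s: score(s.split()), reverse=True)
```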

We can form a graph of the patterns based on a model of their semantics. A complete sentence is an episodic memory event. We pick out only those sentences that are ranked high by their intentionality and connect them in sequence (FOLLOWS), surrounded by a cloud of components (CONTAINS, EXPRESSES). The resulting structure is a hierarchy in which small fragments act as hubs to join together longer episodic memories (sentences).
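One possible encoding of that structure, sketched with plain tuples (the edge orientation for EXPRESSES is an assumption on my part):

```python
def episode_graph(events):
    """events: an ordered list of (sentence, fragments) pairs, already selected
    for high intentionality. Returns the edges of a small memory graph."""
    edges = []
    previous = None
    for sentence, fragments in events:
        if previous is not None:
            edges.append((previous, "FOLLOWS", sentence))   # timelike sequence of events
        for frag in fragments:
            edges.append((sentence, "CONTAINS", frag))      # event contains its components
            edges.append((frag, "EXPRESSES", sentence))     # fragment acts as a hub for episodes
        previous = sentence
    return edges

g = episode_graph([
    ("call me ishmael", ["call me", "ishmael"]),
    ("the white whale swam on", ["white whale", "the white whale"]),
])
```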

The hierarchy of pattern n-gram agents in a memory representation, on the inside of a cognitive agent!

An input fragment of n words (or a symbol of n strokes) is called an n-gram. The scaling hierarchy is like this:

characters/strokes → words → phrases (n-grams) → themes → sentences (events) → stories → concepts

Notice that these are timelike (sequential) collections of events at different scales of the input stream. In principle, any of these n-grams can be repeated, but the longer the sequence, the less likely it is to be repeated. Specific meanings refer to specific occurrences. Too much repetition is probably padding. Indeed, language studies show that the most frequently repeated words are the least important.

The fact that a sensory stream is sequential doesn't mean its memory representation should be. Sequences are useful for encoding with causal order, but concepts may form more complex networks, with and without order. We recognize the same principle in biology with DNA.

Nucleotide bases → codons → DNA/RNA → proteins → processes → tissue

Smaller components are reused (by composition) forming larger things. Composition makes things potentially more unique. And uniqueness is related to meaning. There’s a reason we make statues and monuments as spikes that stand out against a background.

After fractionation of an input stream by a cognitive agent, the agent has a collection of small semantic fragments that it can remember (count) and use to rank longer fragments according to their intentionality (defined above).

We might choose to focus on sentence events of high intentionality, because sentences are the preferred scale for sampling and “thinking” (that’s really a definition of a sentence). So there is a preferred scale — the input may have some scale-invariant properties on the fragment level (it does), but this breaks down at the scale of about three words, where a preferred scale arises. It seems to arise because of the timelike sequence, with bindings left and right, forming molecules of order three. Anything longer than six words hardly ever repeats.

From reference 3. Small fragments are random and form a power law without a natural scale.
From reference 3. Longer fragments are log-normal, i.e. sparsely occurring in the input stream. Their rarity makes them more significant and thus of greater importance. This longitudinal distribution in time leads to meaning.

We can use intentionality to pick out the significant sentence episodes from documents, and use our own cheating judgement to see how well we do. It’s a bit like a Turing test for meaning. But the key thing here is that no dictionary of meanings is ever used to identify these. We can judge them for significance by cheating with our human inside knowledge, but they arise solely from their spacetime process invariance. The results are usually surprisingly good.

A selection of highly intentional sentences from a speech by President Obama.

Memory spacetime

If we now use the four spacetime semantics above to make a graph of the stream, it forms a hierarchy of cellular structures, due to the identifications of space and time. This forms a new spacetime, with different agents and relationships, based on the fractionated n-grams.

We can use the sensory frames as a context combined with a quick assessment of the sensory process (like a limbic response) to label the episode fragments using the running state of the agent as a contextual memory address. When we want to recall something, we recreate some or all of that context to find a path through the graph.
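A hypothetical sketch of such context addressing (the class and its thresholds are my own illustrative choices): episodes are indexed under the context tokens that were active when they were stored, and recall works by partially recreating that context:

```python
from collections import defaultdict

class ContextMemory:
    """Sketch: episodes are indexed by the running context active when stored;
    recall works by partially recreating that context."""

    def __init__(self):
        self.index = defaultdict(set)     # context token -> episode ids
        self.episodes = {}

    def store(self, episode_id, text, context_tokens):
        self.episodes[episode_id] = text
        for token in context_tokens:
            self.index[token].add(episode_id)

    def recall(self, partial_context):
        """Return episodes whose stored context overlaps the recreated one,
        most-overlapping first."""
        hits = defaultdict(int)
        for token in partial_context:
            for eid in self.index.get(token, ()):
                hits[eid] += 1
        return [self.episodes[eid] for eid, _ in
                sorted(hits.items(), key=lambda kv: -kv[1])]

m = ContextMemory()
m.store(1, "the white whale swam", {"sea", "whale", "fear"})
m.store(2, "we set sail at dawn", {"sea", "morning"})
print(m.recall({"whale", "sea"}))   # episode 1 ranks first
```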

The hierarchy of patterns starts with proper names on the microscopic scale and is composed by network relationships of n-grams into larger concepts by co-activation across episodes.

One of the four semantics is NEARNESS, distance, or similarity (not a Euclidean distance, but a network channel width). What makes episodes similar? Clearly, if the fragments associated with two episodes overlap, then the greater the overlap, the greater the similarity. This is the bio-informatic method. Contextual assessment also plays a role. So we might imagine that meaning arises in the overlaps — which are invariants of a sort (think interferometry).

This can be done directly on the stream too, if we have enough memory, to extract timelike or longitudinal invariants from irregularly repeated patterns. The same result can be computed more simply by breaking the stream into fragments and comparing them by counting.

Distance between patterns can be measured by n-gram component overlaps, as in bio-informatics. When the fragment clouds of important episodes overlap, there is a repeated conceptual content in the selected episodes. We define these as bootstrapped prototype concepts.
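A minimal sketch of this overlap measure, and of grouping episodes into prototype concepts by it (the Jaccard-style ratio, the greedy grouping, and the threshold are my own illustrative choices):

```python
def nearness(frags_a, frags_b):
    """NEARNESS between two episodes as fragment-set overlap (a Jaccard-style
    ratio), in the spirit of bio-informatic sequence comparison."""
    a, b = set(frags_a), set(frags_b)
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)

def prototype_concepts(episodes, threshold=0.2):
    """Greedily group episodes whose fragment clouds overlap; the accumulated
    cloud of each group acts as a bootstrapped prototype concept."""
    concepts = []   # each: {"members": [...], "cloud": set of fragments}
    for name, frags in episodes.items():
        frags = set(frags)
        for concept in concepts:
            if nearness(frags, concept["cloud"]) >= threshold:
                concept["members"].append(name)
                concept["cloud"] |= frags
                break
        else:
            concepts.append({"members": [name], "cloud": set(frags)})
    return concepts

episodes = {
    "s1": {"white whale", "the sea", "fear"},
    "s2": {"white whale", "ahab", "fear"},
    "s3": {"natural selection", "species"},
}
print(prototype_concepts(episodes))   # two clusters: whale/fear vs. selection
```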

Some examples

We can try the approach on a few books according to this procedure. Scanning the texts, we can look at the spacetime fragments:

  • The fragments (n-grams of order 1–6).
  • The most important / most intentional sentences (micro-episodes) — statistical characterization or longitudinal invariance in the input spacetime.
  • The fragment overlaps across episodes accumulated over larger stories (macro-episodes). These require a separate process, independent of input, in which the agent grooms and correlates its episodic memories — a kind of dreaming?

Let’s see what these look like, scanning the first book, Bede's Ecclesiastical History of Britain, which stands in for an experiential sensory stream. Here's a short excerpt of the invariant n-grams:

Key n-grams from the History of England, by The Venerable Bede. Smaller fragments are proper names, and longer fragments are more meaningful conceptual themes.

We see how plausible themes pop out straight away, in the form of locally persistent features, or quasi-invariants, of the sampling process.

If we now cheat, from our god's eye view, by using our evolved human understanding of the intended meanings and synonyms, then we can see a different level of significance by collecting similar concepts together into context hub component overlaps by NEARNESS (see figure above). Using my ad hoc characterizations of the fragment sets:

High-level themes from Bede's history, reinterpreted by cheating, involve only three high-level concepts and some spurious noise. We see that the patterns cluster around larger concepts.

Scanning Moby Dick, a similar context hub aggregation leads to more separate regions. Again, these are summarized in ad hoc terminology by me from the groups of n-grams. The largest cluster is formed from terms expressing purely emotional content.

Moby Dick, high level concepts reinterpreted by cheating about the meanings. We see that the patterns cluster around larger concepts. The novel appears to be about primal emotions far more than seafaring or whaling.

In Darwin’s Origin of Species, a more focused book, with a clearer intended message, the clusters have these dominant signatures:

Darwin's Origin of Species, high level concepts reinterpreted by cheating about the meanings. We see that the patterns cluster around larger concepts.

Now we are nearing issues of greater generality that we might call concepts. They are not just themes associated with this episode, but are general enough to be remembered as concepts for later reference.

The formation of these clusters is an impartial matter, but the insights about their significance are found by "cheating", because the generalized meaning of the clusters is inferred from my own understanding of what a collection of fragments represents. It doesn’t emerge solely from the data.

However, this is an important insight too: settling on an agreed meaning requires additional processing outside the scope of a single process — on a larger historical and societal scale. Integrating such independent agent processes is surely what brains have evolved to do well.

As a quick counter-example, sometimes the process yields remarkably little of lasting value. Scanning the novel Slogans, of 500 pages, which darts from scene to scene and topic to topic, the dominant themes, scaled over the extent of the book, are actually fewer than in a much shorter, focused text on a specific subject! The story is mostly “blah blah blah” on a large scale. Only these groups remain:

This, however, is certainly one valid response to the novel. If we focus on shorter sections, then more things stand out against their local background. This is the dilution effect of scale on cognition. Again, spacetime processes and scales are the determinant.

Aftershocks and stigmergic processing

Let's comment briefly on the idea that some meaning emerges in a larger social process, between agents. Moby Dick seems to be the only well-known book in the United States, at least if you believe television and the movies. We know it as the book about Ahab's obsession with the white whale. It’s a long book, so there are many interesting phrases and moments, yet it’s interesting how whales play no obvious role on the broader conceptual level (clusters shown above). On the smaller thematic level of phrase fragments, “whale” does appear as a pattern many times, i.e. as a proper-name subject with longitudinal persistence.

A few n-gram frequencies from Moby Dick. The famous phrase about the white whale doesn't jump out statistically, but has captured the imagination and poetry of readers outside the scope of the book itself. This illustrates how cognition doesn't end with data.

And even more interesting, the famous phrases "Call me Ishmael" and “the white whale” are less prevalent than terms like "whaling" and other emotional n-grams concerning faith, and yet it’s the daemon white whale that has survived in the popular consciousness.

I think this shows that human cognition is not simply what happens in the moment of sensory experience by a single agent. Two things happen after humans have experienced an episode:

  • We groom and integrate the concepts into a large domain of experiences, and some of the ideas will resonate between different experiences and be amplified or demoted in importance. If we want to reproduce that in a machine learning scenario, we need processes that coactivate memories and alter their importance through recollection. Perhaps this is artificial dreaming? (A minimal sketch of such coactivation follows this list.)
  • We further discuss the ideas we've perceived with others in the society around us, and therefore relive the remembered episodes through shared storytelling. This repeats, enhances, and biases fragments of the stories in new ways — by feeding back new narratives with repeated elements. In other words, a key component of our memories is not within us, but is stigmergic. We’re basically large insects!
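As a toy sketch of the first of these points (entirely my own construction, not a claim about how brains or any particular system do it), a grooming pass might amplify fragments that recur across independently stored episodes and let isolated ones fade:

```python
from collections import Counter

def groom(episodes, importance, boost=1.5, decay=0.9):
    """Toy 'dreaming' pass: fragments that recur across several independently
    stored episodes are amplified; isolated ones slowly fade. The boost and
    decay factors are arbitrary choices for this sketch."""
    appearances = Counter()
    for frags in episodes.values():
        appearances.update(set(frags))            # count each fragment once per episode

    for frag, seen_in in appearances.items():
        factor = boost if seen_in > 1 else decay  # resonance vs. fading
        importance[frag] = importance.get(frag, 1.0) * factor
    return importance

episodes = {"e1": ["white whale", "fear"], "e2": ["white whale", "sea"]}
print(groom(episodes, {}))   # "white whale" is amplified; the rest decay slightly
```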

In the final instance, a memory representation, within a cognitive agent, settles into a graph based on the four semantic types. Its structural adherence to these types makes it efficient at finding memories that are semantically triggered through sensory contexts, real or imagined.

In the end, a novel is encoded as a graph from which we can reconstruct portions of the story from various meanings and concepts, indexed by contextual fragments. The spacetime structure makes recovering memory much easier, based on the four semantics. No linguistic knowledge is required, and no logic is involved in reasoning — only network linkage of the four types.

There is a separation of two kinds of information in these structures: short term context and long term episodic story lines. The contextual information is the natural way to index the long term structures. So when a similar context arises later, it will tend to seek out episodic memories from longer term episodes which may then be recalled and combined in some way. That’s essentially how Machine Learning magic tricks are performed in our contemporary version of AI–where no one really understands how or why the tricks work. Neural networks are spacetime networks. This could offer an insight into how those tricks end up working, and where they are likely to fail.

Summarizing agent semantic spacetime

Wrapping up, what this very brief sketch hopefully illustrates is how a simple change in our thinking about how to look at spacetime provides a fruitful way of seeing processes through the lens of the most simple observable distinctions around an agent. I've tried to show how cognition can be viewed as the transmutation of one hierarchical spacetime process into another — through the identification of scales relative to the sampling cycle of the agent.

A real-time (small data, no prior training) approach to cognition uses only network spacetime to bootstrap concepts from repeated patterns. In the example of text analysis, first we put a ring around key sentences in a stream based on a measure of intentionality — fragment length divided by frequency. We connect the selected sentence events up in an ordered timeline. Then we smash them up into n-grams and link this cloud of fragments to the sentence using the containment and expression (memory) semantics. The clusters of each event overlap, and these composite combinations of fragments indicate commonality of composition, like a new quasi-invariant in semantic spacetime. We identify these as the "DNA" of primordial concepts on a larger scale.

The formation of concepts appears, from these ideas, to be a natural scaling process, starting from dynamical origins, to identify spacetime invariants as proper names that stand for their semantics. Concepts and meanings are bolstered by post processing and popular resonance when social processes amplify their importance. Thus a single agent (e.g. a brain) is not the beginning and end of cognition — that spans a hierarchy of scales and cycles both within and without.

This pushes into focus the cyclical sampling and resampling processes, across several scales, but also the fragmentation, accumulation, and grooming processes for classifying symbolic information and separating scales. Perhaps these are associated with brain waves. We are not so much a sum of our experiences as we are a sum of our interior-exterior resonances.

There’s a lot still to be done to understand all this in detail, but I like the simplicity (the parsimony) of the idea. In the beginning of evolution, space and time is all there was. If we can bootstrap from these ideas, then we’ll have identified the guide-rails for cognition in that story.

Some literature:

