Searching in Graphs, Artificial Reasoning, and Quantum Loop Corrections with Semantic Spacetime

Using the SSTorytime tools to demonstrate directed process dynamics

Mark Burgess
Mar 22, 2025

“Graph” is the mathematical term for a network: a collection of places or things called nodes (vertices) joined together by links (edges) of some kind. Graphs can represent anything, real or imaginary. The Internet is our most famous network or graph, but graphs are hidden in plain sight everywhere. They occur in networking, in quantum processes, and in Artificial Intelligence, and they are usually associated with searching for something by a sequence of process steps. We can illustrate and learn from the different cases to apply the knowledge across the board.

A graph is an efficient representation of a process when the process consists of a finite set of distinguishable states. When the number of states becomes infinite, the graph turns into a continuous geometry (a curve, a surface, etc.), and graph methods turn into the method of Green functions.

Graphical methods are important when searching through process spaces, because every point in a trajectory has a direct pointer to what can come next. A graph is itself a process! As one process mimicking or representing another, graphs become a lingua franca for process descriptions. Unlike a plain archive, where someone has to run around every box in a warehouse to see what’s inside during a search, in a graph every box has a pointer to where you should go next. In complex cases, different kinds of links can point to different possibilities or dimensions. A path through a network of events with arrows thus represents a flow-chart of cause and effect that links up states in a form of storytelling, or reasoning, which is at the heart of computation.

We say that the semantics of a graph are the rules it satisfies in order to represent the process concerned. The term semantics originates in linguistics, but has been adopted by formal languages too, to refer to the details expressed in the steps and operations of a proposition or action. As we’ll see below, by adjusting the semantics of the nodes and links, we can solve some very interesting problems, spanning a multitude of applications, in a similar way. In particular, the way we describe the “functional semantics” or behaviours of spacetime elements, at different scales, leads to many prominent descriptions of nature: from subatomic particles to biological cells and organisms.

Scale is always an important aspect of process semantics, much neglected in information science. In computer science, graph relationships are usually (unimaginatively) represented as something like A “is a friend of” B, etc. Often the relationships are assumed to be directionless, or mutual. In reality, graphs have far more uses than merely recording interpersonal relations. Moreover, most relationships have a flow or preferred direction. Both real and conceptual spacetime can be polarized on different levels, and this is where the power of Semantic Spacetime lies.

Searching possibility space

When we search a space for what is possible, there are three main cases, or process boundary conditions:

  (i) You know where you are, but not where you’re going (fixed starting point, open ended).
  (ii) You know where you’re going, but not where to start from (converging on a target).
  (iii) You decide where you’re starting from and where you’re going (you plan a definite transit from start to finish, from emission to absorption by a node).

These are called retarded, advanced, and mixed or Feynman boundary conditions in physics. We can see them illustrated in the figure below, but they apply everywhere.

Figure 1: three graph boundary conditions

The basic point is that paths tell stories, describe possible futures or past histories, and so have a special place in reasoning. Since each path tells a story, one obvious problem of kind (iii) is path solving: how can I get from A to B? What are my options?

Path solving is what your GPS finder does, for instance. If there’s an open Euclidean space between the start and end locations, then there’s an infinity of possible paths, unless something is blocking the way, as in the figure below (a familiar scenario for physicists, or for shoppers trying to enter a store). You can’t drive through a house or a wall, so there are additional constraints to solve for. For a graph, by contrast, there is only a finite number of paths.

Figure 2: Getting from A to B with constrained paths

Most networks involving roads or knowledge resources have only a few possibilities connecting certain dots. The same thing is true in basic physics, where famous experiments, from photon absorption to the double slit, are used to expose path-like behaviours in subatomic processes. Even Brownian motion is explained by a graph of collision processes. The continuum space of possibilities is reduced to connected dots in practice.

Let's use this picture as a graph that could be typical of scenarios we might be interested in searching. How many ways can we get from A to B?

Figure 3: Figure 2 from the perspective of a graph

As one searches by stepping from node to node in a graph, traversing each arrow, the first of the methods (i) in Figure 1 traces out a set of nodes equivalent to a uniform “ball” of given radius in all directions, for a certain number of hops from the starting node. The second (ii) is like falling into a well, which one might typically represent as a potential surface in a continuum model; here, graphs are like vector fields. Unlike a ball in Euclidean space, a discrete node ball in a graph may look like an irregular and deformed cluster of nodes, even though it satisfies a strict mathematical property. The ball construction is nevertheless useful as the way one defines the spacetime “dimension” of a graph for a so-called causal set, by comparing the number of nodes within a radius R to the formula for a sphere in n-dimensional Euclidean space.
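As a back-of-envelope sketch (standard causal set reasoning, not anything specific to SSTorytime), if |B(R)| counts the nodes within R hops of a starting node, then the effective dimension n follows from the Euclidean volume scaling:

    \[ |B(R)| \sim c_n R^n \quad\Rightarrow\quad n \approx \frac{\ln|B(R_2)| - \ln|B(R_1)|}{\ln R_2 - \ln R_1} \]

for two sampled radii $R_1 < R_2$.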

Path solving strategies

Let’s dwell on this problem of finding solutions from A to B in a general way. Here is one general method, based on spacetime processes, that offers a useful insight into both computational reasoning and related dynamical topics like the now infamous quantum loop expansion. The loop expansion describes the fluctuating processes that “dress” the charges and masses of elementary particles, transforming our view of them from static billiard balls into active squirming agents. Of course, we start with the most elementary form of the problem.

One way to solve a search problem would be to take the brute force approach: we start from A and just keep on branching out in all directions to search all nodes in the ball around A until we find B (assuming that B is distinguishable and identifiable somehow). We spread out, like they do in the movies, following every possible path with increasing radius until we find B. This is a possibility, but one problem with it is that we don’t even know if we can get to B from A, or in which direction to go, for that matter. Also, it may be slower than necessary, because the search space (the set of nodes to explore) grows exponentially in the worst case.
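As a minimal sketch of this brute force expansion (a generic breadth-first search, not the actual SSTorytime code; the Graph type and the toy graph in main are hypothetical stand-ins):

    package main

    import "fmt"

    // Graph is a simple adjacency list: each node points to the
    // nodes that its outgoing arrows lead to.
    type Graph map[string][]string

    // ballSearch expands a wavefront outwards from start, one hop
    // at a time, until it finds target or exhausts the reachable ball.
    func ballSearch(g Graph, start, target string) (radius int, found bool) {
        visited := map[string]bool{start: true}
        wavefront := []string{start}
        for len(wavefront) > 0 {
            var next []string
            for _, node := range wavefront {
                if node == target {
                    return radius, true
                }
                for _, nb := range g[node] {
                    if !visited[nb] {
                        visited[nb] = true
                        next = append(next, nb)
                    }
                }
            }
            wavefront = next // the new "circumference" of the ball
            radius++
        }
        return radius, false
    }

    func main() {
        // A toy stand-in, not the graph of Figure 6.
        g := Graph{
            "A1": {"A2", "A3"},
            "A2": {"S1"}, "A3": {"S2"},
            "S1": {"B6"}, "S2": {"B6"},
        }
        fmt.Println(ballSearch(g, "A1", "B6")) // 3 true
    }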

A more efficient way to search is to start from both A and B at the same time, thus breaking the symmetry of the starting point, and sending out waves from both A and B to probe their neighbourhoods (see figure 4 below, where they are shown in green and blue). Each acts as a beacon or guide rail for the other. If the outgoing blue/green wavefronts meet at any point then we know that there is a path between them, and in which direction. We can use this to find all possible paths from A to B and vice versa. To construct the path, we then just need to splice the journey from A to the reflection (or adjoint) of the journey from B so that the arrows all match up.

Figure 4: causality cones from A and B spreading out in wavefronts

This is straightforward in the simplest case of homogeneous arrows, and becomes increasingly complicated if there are many different types of arrow. So, let’s take the simplest case and think of the arrows as all being homogeneous, and the space in between as being some known graph of nodes or points joined together with pre-specified links, like the figure below.

Figure 5: waves on a graph

Our search method is now to emit guiding waves, depicted by the green and blue coloured arrows above, from both A and B at the same time. Moreover (importantly) we assume the arrows are all unambiguously uni-directional, so we can just follow the arrows from A to B. If the wave fronts (i.e. the outermost circumference nodes) coincide for the green and blue processes, then we’ve found a path. The two processes are causally independent: the green process, starting from A, doesn’t know anything about the blue process (and vice versa), but when they meet they co-activate one another, like “neurons that fire together wire together”.

This is the method.

Semantic Spacetime and the SSTorytime software

The solution can be seen with the help of a computational graph solver. As part of the SSTorytime project library, I’ve been writing software methods to build semantic spacetime graphs from language models. We can model such a graph using N4L to describe the network below. We need to give names to the nodes, so let’s define and label the graph like this:

Figure 6: a fully labelled version of our test graph. Now the path from A to B is A1 to B6

Now we want to search for paths from A1 to B6. In N4L language, we can choose to write it minimally like this:

Figure 7: the graph definition in N4L language

Details aside, this can now be uploaded into a data model and we can search it.

First method: classical deterministic search

Once again stripping the problem down to the simplest case, we look at the following algorithm. Following the figure above, we increase the depth or radius, alternately from left and right, spreading out along the arrows. At each step, we sample the orthogonal sections or “spacelike hypersurfaces” along the outward cones from A and B to find the outermost circumferences, i.e. the set of nodes at the edge. We do this from A and B (left and right), for forward and backward arrows respectively:

Figure 8: the basic solver approach as taken from the SSTorytime code

We are searching the graph deterministically, which means always in the direction of the arrows, and we advance our paths without any probability or randomness. The radii outwards from A and B take turns to increase by 1. If we increased both at the same time, the wave fronts could pass over each other, depending on the path lengths–this is an issue we have to deal with for a discrete graph.
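A minimal sketch of this alternation (again generic code, not the SSTorytime implementation; it reuses the Graph type from the sketch above and records the path that led to each node on a wavefront):

    // Paths maps each wavefront node to the paths that reached it.
    type Paths map[string][][]string

    // step advances a wavefront by one hop along adj, extending
    // every recorded path, and returns the new circumference.
    func step(adj Graph, front Paths) Paths {
        next := Paths{}
        for node, paths := range front {
            for _, nb := range adj[node] {
                for _, p := range paths {
                    ext := append(append([]string{}, p...), nb)
                    next[nb] = append(next[nb], ext)
                }
            }
        }
        return next
    }

    // reverse builds the adjoint graph, with every arrow turned around.
    func reverse(g Graph) Graph {
        r := Graph{}
        for from, tos := range g {
            for _, to := range tos {
                r[to] = append(r[to], from)
            }
        }
        return r
    }

    // meet alternately grows the cone from start (following arrows)
    // and the cone from end (against arrows) until the two
    // circumferences share nodes, or maxDepth is exhausted.
    func meet(g Graph, start, end string, maxDepth int) (left, right Paths, touch []string) {
        left = Paths{start: {{start}}}
        right = Paths{end: {{end}}}
        radj := reverse(g)
        for depth := 0; depth < maxDepth; depth++ {
            if depth%2 == 0 {
                left = step(g, left) // left cone takes its turn
            } else {
                right = step(radj, right) // right cone takes its turn
            }
            for node := range left {
                if _, ok := right[node]; ok {
                    touch = append(touch, node)
                }
            }
            if len(touch) > 0 {
                return left, right, touch
            }
        }
        return left, right, nil
    }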

Where these circumferential wavefront sets overlap, we’ve found a common point belonging to a complete path, and we simply need to splice the left path to the reflected right (adjoint) path at this point:

Figure 9: splicing the left-right paths
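Splicing is then just a matter of reversing the right hand (adjoint) path and joining the two at the touch point; a hypothetical helper:

    // splice joins a left path ending at a touch node to a right
    // path recorded backwards from B, so all the arrows line up.
    func splice(left, right []string) []string {
        path := append([]string{}, left...)
        // right looks like [B6 ... touch]; walk it backwards,
        // skipping the touch node the left path already contains.
        for i := len(right) - 2; i >= 0; i-- {
            path = append(path, right[i])
        }
        return path
    }

For the toy graph above, splice([]string{"A1", "A2", "S1"}, []string{"B6", "S1"}) yields the end-to-end path A1, A2, S1, B6.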

When nature solves problems like this, it tends to do so by leaving a stigmergic trail to mark out and “light up” the paths, as ants do with pheromones, and bodily tissue does using signalling molecules like heat shock proteins and hormones. Here, we are using the computer’s memory to map the paths.

The demo program search_contrawave.go shows how the paths are formed. Remember, the orientation of the search aligns with the arrows, forwards from A and backwards from B. In the figure below, as we increase the radius from left and right, we assemble the nodes in the circumference of each cone and see if they have any nodes in common, e.g. after one hop on the left, the left cone at depth 2 has wavefront {A2,A3}, while the right cone has not yet moved (the counting includes the start node).

Figure 10: at each step, print the circumferential sets of nodes belonging to the wavefront. After one move on the left, we go from A1 to A2,A3.

We keep on increasing like this until, four agents away from the ends, we start to reach overlap at nodes S1 and S2, which lie in between:

Figure 11: the moment at which left/right waves first touch.

For each touch point, we take the path that led up to it from each side and join the left to the adjoint of the right–where the adjoint means reversing the meaning and direction of the arrows. Finally, we can match up the paths to yield the following uni-directional paths from end to end. These are the five solutions that join A to B.

Figure 12: summary of all the spliced paths from A1 to B6
Figure 6: reprise

One last thing is worth mentioning for the deterministic case: if we increase the radii further, the waves have already passed through each other and there can be no solution matching the circumferential sets with a longer path length, as shown below.

Figure 13: After the waves have crossed over, there are no overlaps anymore

This is what we’d expect from a simple monotonic approach. In fact, it’s not the only possibility if we are willing to entertain processes with all possible directions–with and against the flow.

A second method: self-adjoint deterministic search

Unless you’re trained in the dark arts of quantum theory, the foregoing simplest case might seem to be the only possibility. But, if you know something about quantum mechanics, you might have already noticed something a bit familiar about the setup, in starting to search from both ends at the same time. It’s a non-local idea, very familiar from quantum mechanics: the transition function from end to end is a product of a search wave 𝜓 AND its adjoint 𝜓†. A quantum transition is written

Figure 14: A quantum transition function has the two wave structure from start to adjoint end.
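In generic textbook notation (a standard form, not necessarily the exact expression in the figure), the two-wave structure reads

    \[ \mathcal{A}(A \to B) = \psi_B^{\dagger}\, U\, \psi_A, \qquad P(A \to B) = \left| \psi_B^{\dagger}\, U\, \psi_A \right|^2, \]

where U propagates the state between the endpoints: one factor is the outgoing search wave, and the adjoint factor is the wave anchored at the destination.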

The first algorithm above is, in fact, enough to model many of the features of the transitions for a Schrödinger type quantum system as long as one allows for multiple independent paths happening simultaneously. Essentially, a system in a quantum superposition is a parallel processing system, in computer terminology. However, when we reach the level of a relativistic quantum theory, there is another intriguing possibility from the temporal symmetry.

Superposing all possible solutions, with reversibility, allows transitions to backstep in time in the middle of a path. This is a quantum loop. What this means in a continuum model of spacetime has always been hard to understand, because the time variables in the continuum equations muddle several different interpretations of time. In fact, if one process reverses along the direction of travel of another, then it appears to count as negative time relative to the first. So (from the perspective of starting point A1) the two waves in the search algorithm seem to move in opposite directions. The green wave moves forwards in time, the blue wave starts in the future at B6 and moves backwards in time to meet the first, and the effect is that the whole wave joins up to give a single positive-time particle for the whole transition.

The new possibility here is that reversible paths don’t have to follow arrows in their forward direction only. They can also indulge in travelling backwards along arrows they haven’t travelled before. What happens if the wave backtracks along its path? If some of the links in the graph are followed in the reverse direction (like driving the wrong way down a one-way street), then it’s formally like going backwards in time, or being an anti-particle (or at least a contrarian!). The consequence for quantum theory is that it allows small dances, like fluctuating corrections that backtrack and go around loops, doing a twirl before continuing on to their designated end point. The same thing is possible in our computer model here, if we allow the causality cones to follow arrows forwards and backwards in time. The extent to which this happens in our model depends on how many reverse path possibilities happen to be available as the wave moves through the graph, and therefore also on the rates of the wavefronts’ motions.

Backtracking might not seem to make much sense for the simple example here, but it will prove to be interesting when we apply this to artificial reasoning below.

To show this on the computer, we relax the strict forwards and backwards alignment of the causality cone, and allow paths to follow arrows backwards too, from either end.

Figure 15: we adjust the forward and backward path constraints to allow any direction.
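In code, the relaxation only changes the neighbour relation: both cones now step along the union of the forward and reverse arrows. A hedged sketch, reusing the helpers above:

    // undirected merges a graph with its adjoint, so a step may
    // follow any arrow forwards or backwards (the "wrong way down
    // a one way street" moves that generate the loop corrections).
    func undirected(g Graph) Graph {
        u := Graph{}
        for from, tos := range g {
            for _, to := range tos {
                u[from] = append(u[from], to)
                u[to] = append(u[to], from)
            }
        }
        return u
    }

    // Both cones then expand with the same relaxed relation:
    //   left  = step(undirected(g), left)
    //   right = step(undirected(g), right)
    // Loop paths can now revisit nodes, so the recorded path
    // histories (a memory process) are needed to bound the search.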

The result leads to some strange (but perfectly reasonable) paths. I split them into two parts below called “tree level” and “loop level”, which is quantum jargon for tree graphs (or DAGs, i.e. Directed Acyclic Graphs), and diagrams that contain backtracking.

Figure 16: the path solutions with backtracking, up to co-depth 5. More solutions may exist at longer depths.

The tree or DAG paths are the same as before for the purely deterministic forward time case. The loop diagrams are additional paths that take the less trodden path, but add new possibilities for getting from A1 to B6.

What’s interesting about the loop paths is that they are longer than they need to be in general, because they may take detours. They cost more to follow, and–moreover–the longer path length introduces a time delay into the proper time of the search arrival process. When the classical wavefronts have already passed each other, the waves can actually still join up into possible search pathways, because some paths skipped a beat. On a statistical level, if one imagines searching for all possibilities at the same time, these extra “loop corrections” lead to a spreading out of the arrival times, or a spacetime uncertainty for the search process.

It’s well known that, in quantum theory, Feynman’s famous path integral and the Schwinger Effective Action are both generators of loop graphs that represent processes like these. They can be classified into tree level or classical processes, together with “loop quantum corrections” that include additional detours and add time and energy to the processes. Here, we see that a graph search has a similar behaviour, due to the combination of a sum-over-paths satisfying the search constraints and a strategy of allowing increasing levels of acausal dilly-dallying through detours along the way. All this is beautiful to see illustrated in a simple but very quantum way, but more interesting still is what this implies for artificial reasoning.
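Schematically, the analogy is with the familiar sum over paths, in which every route from A to B, including the loopy ones, contributes a weighted term to the total transition amplitude:

    \[ \langle B|A \rangle = \sum_{\text{paths } P\,:\,A \to B} e^{\,i S[P]/\hbar}, \]

with the tree-level paths dominating and the loop detours supplying the corrections.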

Reasoning is working towards an outcome

The path integral idea entered the world of physics through quantum mechanics, due to Feynman and Wheeler. The non-determinism and non-locality can be understood as parallel processes that guide “particles” to a destination through the available paths, avoiding local minima of work along the way. The search problem is about breaking down single possibilities into ensembles that run in parallel to seek out every possible option at once.

It turns out that other kinds of non-intelligent but parallel processes are able to do this too. Brownian motion allows dust particles to swirl around and avoid one another, so that–on an ensemble level–they search the space of states in their partition function in a way analogous to the quantum action principle. Both these are actually generating functions for graphs.

Jumping from the ridiculously low-level to the sublime epoch-spanning scale, we could also say that evolution does this dance too. A species that mutates into something that is blocked from going forwards can (within the scope of the entire process) still backtrack, by going back to the previous mutation and either waiting for the right moment or finding a different path. You might think that these cases are nothing like one another, perhaps even anthropomorphizing an idea about reasoning. What one can’t reject is the existence of process pathways or causal histories. After all, evolution is about statistical populations and superpositions of massively parallel processes. In fact, most processes in nature are statistical in nature, and “nature finds a way” to avoid obstacles by backtracking. The key to these solutions lies in the topic of causality.

What one needs to accomplish this backtracking is a memory process. Such processes are known as non-Markovian in the theory of stochastic systems. All of Newtonian ballistics is made up of memoryless Markov processes, but quantum processes (like search and reasoning processes) need a kind of internal memory to be understood.

There are more kinds of search than these two, including random Brownian motion or Monte Carlo methods, but I won’t go into these here.

Reasoning without logic

So we arrive at reasoning. Traditionally, science has come to associate reasoning with logic, but that’s a misleading accident of history. Logic is just one highly constrained form of storytelling that encourages precision. Sometimes its constraints are too stringent and there are no possible paths that satisfy all of its strictures. Logic is more like the first dynamical path solution above. Yet, as we delve into the realm of fuzzy logics and more human-like reasoning, with artificial reasoning systems, the second of our methods begins to make more sense too.

One way to visualize reasoning is to chain together a sequence of decisions forming a “story” or narrative structure involving propositions and relationships between them. These are not necessarily strictly logical, as we understand the term, but they are causal. Among the dark secrets that physicists have understood is the idea that chains of events occur within the realm of causality. Causality is not always easy to identify, especially with incomplete information (see the complaints of statisticians).

Artificial Reasoning has become a broader and more interesting topic that’s currently dominated by an effusion of enthusiasm for Large Language Models. No one claims to understand much about reasoning in LLMs, although the models are apt to make extensive use of it. How do they work? Essentially, the neural network architectures, with their high-dimensional connections, capture probabilities for path transitions. One ends up seeking recurrent pathways to generate causal output, and then one can make inferences.

In semantic spacetime, this is somewhat easier to understand. Look at the graph fragment in the figure below. It shows some notes written perhaps by an investigator, seeking to make a connection between the facts: there is a polluting activity causing trouble, but no one knows exactly who is responsible. An investigator collects information around the troubled spot, and it forms a graph with several kinds of links.

Figure 17: a graph fragment with more elaborate semantics

Performing a purely dynamical semantic spacetime search around the pollution topic node, without understanding the meanings, yields this fragment. The investigator could infer that A could be legally responsible for the pollution, even though all of the links have different semantics and none directly claims that. When the arrows are traversed in any direction, they can be combined with adjoint meanings using a simple set of rules in semantic spacetime. Those rules tell a story from which one can infer an effective causal connection between A and D, about who causes the pollution:

  • “Owns” is a kind of spatial encapsulation (CONTAINS) which says that A is legally responsible for B
  • “Has location” is an invariant attribute (PROPERTY) that correlates the pollution with entity B, by virtue of a mutual dependence on C (which happens to be a place).
  • “Causes” is obviously causation (LEADSTO), so there is now a plausible correlation between A and D involving the offending pollution.

In other words, A owns B, which is located in the same place C as the pollution appears, so A and D are plausibly related somehow. This requires further investigation!
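A hedged sketch of how such a chain might be composed mechanically (the link names, rule classes, and helper here are illustrative stand-ins, not the actual SSTorytime rule set; it assumes the same file and fmt import as the sketches above):

    // The association classes used in the story above.
    type SSTClass int

    const (
        Contains SSTClass = iota // spatial encapsulation, e.g. "owns"
        Property                 // invariant attribute, e.g. "has location"
        LeadsTo                  // causal order, e.g. "causes"
    )

    // Link is one arrow of the investigator's graph.
    type Link struct {
        From, Name, To string
        Class          SSTClass
        Reversed       bool // true if traversed against the arrow
    }

    // tellStory walks a chain of links in any direction, narrating
    // the adjoint meaning when an arrow is traversed backwards.
    func tellStory(chain []Link) {
        for _, l := range chain {
            if l.Reversed {
                fmt.Printf("%s <-(%s, adjoint)- %s\n", l.To, l.Name, l.From)
            } else {
                fmt.Printf("%s -(%s)-> %s\n", l.From, l.Name, l.To)
            }
        }
    }

Calling it on a hypothetical version of the fragment,

    tellStory([]Link{
        {"A", "owns", "B", Contains, false},
        {"B", "has location", "C", Property, false},
        {"D", "has location", "C", Property, true}, // backwards hop
        {"D", "causes", "pollution", LeadsTo, false},
    })

prints the chain of hops from A to the pollution, including the hop traversed against its arrow.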

The path can be solved for in essentially the same way as the double-slit paths above, once we include the underlying process semantics that underpin the more arbitrary linguistic (or at least language-dependent) text of the relationships. Processes are universal in nature, so when we follow dynamical principles of causation, meanings write themselves.

So, a robot walks into a bar

Two robots approach one another (perhaps they walk into a bar, or simply into one another) and come to a complete halt as they cross into each other’s paths. As they try to get out of each other’s way, they follow the same deterministic patterns and continue to block one another, while dancing.

Alternatively, one robot walks into another kind of bar and gets injured. Instead, it could backtrack and wait for the bar to be raised, whereupon it is free to continue on its logical way. If one of the robots took a step back, or took a sidestep diversion to lengthen its path to its goal, the two could simply “flow past” the obstruction, like the wavefronts in the examples above.

How might one robot know to actually wait its turn, back up (retreating away from its intended destination), and get out of the way of another? As humans, we do this all the time. It has to do with the choice of strategy in selecting a path forwards. Monotonicity, as practiced by simple logical systems, is sometimes a path towards deadlock. But doesn’t it require intelligence to get out of a deadlock? Not at all. A backtracking algorithm that unwinds an intended trajectory to find a new solution could solve this in a “smart” way, without any intelligence.

If agents are strictly trying to minimize their effort (least action), this could not work, because the extra looping or waiting costs slightly more. But if we allow them to borrow some goodwill, or perhaps some time from the bank, by not always taking the shortest path, then the likelihood of such deadlocks in local minima becomes quite insignificant. The gains from not getting stuck may outweigh the cost of waiting, so strict shortest path or least work algorithms may be unhelpful.
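A minimal sketch of such unwinding (generic depth-first backtracking, not a real robot planner; the blocked predicate is a hypothetical stand-in for nodes currently occupied by the other agent):

    // dfsAvoid searches for a route from cur to goal, stepping back
    // and trying the next branch whenever it hits a blocked node.
    // The unwinding happens in the recursion: a failed branch
    // returns nil, and the caller tries its next alternative.
    func dfsAvoid(g Graph, cur, goal string, blocked func(string) bool, seen map[string]bool) []string {
        if cur == goal {
            return []string{cur}
        }
        seen[cur] = true
        for _, nb := range g[cur] {
            if seen[nb] || blocked(nb) {
                continue
            }
            if tail := dfsAvoid(g, nb, goal, blocked, seen); tail != nil {
                return append([]string{cur}, tail...)
            }
        }
        return nil // dead end: unwind one step of the trajectory
    }

The accepted route may be longer than the blocked shortest path, which is exactly the point: a little borrowed time buys freedom from the deadlock.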

Self-driving systems will indulge in these kinds of algorithms. And, while the training of Artificial Neural Networks and Large Language Models deals with many implicit pathways through networks, these are never explicitly exposed except at the end points. The solutions used to generate their responses are basically search paths through these very high dimensional training graphs.

A quantum of solace

It might seem as though this is the end of the story, but there’s a twist that at first blush seems a little crazy, but which turns out to be what actually happens in processes in the natural world at all scales, starting with the quantum! You might now begin to see the connection between semantics and direction, as special kinds of arrow imply special and distinct kinds of message. In physics, different arrows are usually propagators for different kinds of influence, called quantum fields. In biology, on the other hand, the same kinds of promises are called donor and receptor promises, and the arrows are molecular fragments.

Several authors have written about how one doesn’t need the specific mathematics of Hilbert spaces and wavefunctions to describe quantum systems: it is possible to use conditional probabilities as the arrows. However, as with all representation theory, we choose the representation that speaks to us the most. By choosing a wavefunction approach to model dynamics, one has arrows that correspond more closely to well known properties like momentum and velocity.

This is perhaps why QM is a better representation for discussing transitions and hops between states in low level systems. The directionality is built into the dynamics on a basic level, in the form of momenta (which keeps us in the ballistic language of Newton that we all grew up with), but that isn’t obvious for probabilities. Conditional probabilities are a clumsier way of writing transitions that are less obviously directional. Interestingly, Bayesian representations of directed probabilities are basically what modern AI techniques build on. One wonders if a quantum-inspired formulation could work to someone’s advantage–with antiparticles and all. After all, Isaac Asimov invented the positronic brain!

Promise Theory is a graphical theory of interacting agents. One way to construct a model of space and time is to think of space as a collection of interacting agents. This is a bit like the theory of Causal Sets, but with extra details that resemble the “hidden dimensional” models of Kaluza-Klein and string theory, and Semantic Spacetime comes directly from that. The dynamical approach to reasoning still has much to teach us!


Written by Mark Burgess

@markburgess_osl on Twitter and Instagram. Science, research, technology advisor and author - see http://markburgess.org and https://chitek-i.org
